Test Report: QEMU_macOS 19575

7bfa33b863353ea74c2dd2110cc17945d6c51e0f:2024-09-04:36080

Failed tests: 99/274

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.69
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.99
33 TestAddons/parallel/Registry 71.29
46 TestCertOptions 10.23
47 TestCertExpiration 195.24
48 TestDockerFlags 10.38
49 TestForceSystemdFlag 10.57
50 TestForceSystemdEnv 10.38
95 TestFunctional/parallel/ServiceCmdConnect 31.48
111 TestFunctional/parallel/License 0.14
167 TestMultiControlPlane/serial/StopSecondaryNode 214.18
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.5
169 TestMultiControlPlane/serial/RestartSecondaryNode 209.24
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.4
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.08
175 TestMultiControlPlane/serial/RestartCluster 5.26
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.01
184 TestJSONOutput/start/Command 9.87
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.27
216 TestMountStart/serial/StartWithMountFirst 9.9
219 TestMultiNode/serial/FreshStart2Nodes 9.87
220 TestMultiNode/serial/DeployApp2Nodes 77.48
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 44.71
228 TestMultiNode/serial/RestartKeepsNodes 8.87
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.86
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.12
236 TestPreload 10.05
238 TestScheduledStopUnix 9.92
239 TestSkaffold 12.77
242 TestRunningBinaryUpgrade 591.8
244 TestKubernetesUpgrade 18.89
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.65
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.29
260 TestStoppedBinaryUpgrade/Upgrade 573.95
262 TestPause/serial/Start 9.87
272 TestNoKubernetes/serial/StartWithK8s 9.87
273 TestNoKubernetes/serial/StartWithStopK8s 5.29
274 TestNoKubernetes/serial/Start 5.31
278 TestNoKubernetes/serial/StartNoArgs 5.28
280 TestNetworkPlugins/group/auto/Start 9.92
281 TestNetworkPlugins/group/calico/Start 9.78
282 TestNetworkPlugins/group/custom-flannel/Start 9.81
283 TestNetworkPlugins/group/false/Start 9.9
284 TestNetworkPlugins/group/kindnet/Start 9.82
285 TestNetworkPlugins/group/flannel/Start 9.85
286 TestNetworkPlugins/group/enable-default-cni/Start 9.9
287 TestNetworkPlugins/group/bridge/Start 9.9
288 TestNetworkPlugins/group/kubenet/Start 9.95
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.97
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.81
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
307 TestStartStop/group/no-preload/serial/SecondStart 7.03
309 TestStartStop/group/embed-certs/serial/FirstStart 9.92
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
313 TestStartStop/group/no-preload/serial/Pause 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.07
316 TestStartStop/group/embed-certs/serial/DeployApp 0.09
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
321 TestStartStop/group/embed-certs/serial/SecondStart 5.29
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.16
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.88
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 9.94
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.25
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (14.69s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-210000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-210000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.6898445s)

-- stdout --
	{"specversion":"1.0","id":"a24affd2-349b-4751-91d0-44b93851acb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-210000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b5e64fe7-3e82-41dd-90d5-40953dffcc89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19575"}}
	{"specversion":"1.0","id":"04cd328e-a617-4268-9982-f8327b712433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig"}}
	{"specversion":"1.0","id":"a4224007-ee61-4c87-922c-898f4833fb9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"64fba5a3-ffb6-45a8-9f51-15e6a2fd1b9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"adca0654-7fcb-48d1-8c0e-7b26120a0428","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube"}}
	{"specversion":"1.0","id":"a45486c8-4d19-4815-86d0-d8bf7e7fa14c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"9099c2af-9f3a-4ad3-8705-84ff1c79d3d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"48189ff2-a75d-4d5c-bc56-cc86611e8365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5a93e79e-c13b-4673-b434-0a44e351d409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"29e7623c-0978-4683-8e03-b4a1dbc0a041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-210000\" primary control-plane node in \"download-only-210000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ee2f20b-e8d0-4f2a-9248-1e05483994b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cb98d51-d308-4904-99ae-040a76cbc2ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960] Decompressors:map[bz2:0x14000637d00 gz:0x14000637d08 tar:0x14000637c90 tar.bz2:0x14000637ca0 tar.gz:0x14000637cb0 tar.xz:0x14000637cc0 tar.zst:0x14000637cd0 tbz2:0x14000637ca0 tgz:0x14
000637cb0 txz:0x14000637cc0 tzst:0x14000637cd0 xz:0x14000637d10 zip:0x14000637d20 zst:0x14000637d18] Getters:map[file:0x1400142e550 http:0x140004e62d0 https:0x140004e6320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"656f904d-a912-42a1-9a3f-68939f1f5d29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0904 12:24:43.049256    1663 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:24:43.049408    1663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:24:43.049411    1663 out.go:358] Setting ErrFile to fd 2...
	I0904 12:24:43.049414    1663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:24:43.049537    1663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	W0904 12:24:43.049620    1663 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19575-1140/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19575-1140/.minikube/config/config.json: no such file or directory
	I0904 12:24:43.050925    1663 out.go:352] Setting JSON to true
	I0904 12:24:43.068724    1663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1447,"bootTime":1725476436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 12:24:43.068801    1663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 12:24:43.077918    1663 out.go:97] [download-only-210000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 12:24:43.078071    1663 notify.go:220] Checking for updates...
	W0904 12:24:43.078110    1663 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 12:24:43.079153    1663 out.go:169] MINIKUBE_LOCATION=19575
	I0904 12:24:43.081788    1663 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:24:43.087890    1663 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 12:24:43.090872    1663 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 12:24:43.093842    1663 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	W0904 12:24:43.099815    1663 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 12:24:43.100062    1663 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 12:24:43.104897    1663 out.go:97] Using the qemu2 driver based on user configuration
	I0904 12:24:43.104918    1663 start.go:297] selected driver: qemu2
	I0904 12:24:43.104934    1663 start.go:901] validating driver "qemu2" against <nil>
	I0904 12:24:43.105000    1663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 12:24:43.107789    1663 out.go:169] Automatically selected the socket_vmnet network
	I0904 12:24:43.113495    1663 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0904 12:24:43.113589    1663 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 12:24:43.113668    1663 cni.go:84] Creating CNI manager for ""
	I0904 12:24:43.113685    1663 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 12:24:43.113739    1663 start.go:340] cluster config:
	{Name:download-only-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:24:43.118959    1663 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 12:24:43.123821    1663 out.go:97] Downloading VM boot image ...
	I0904 12:24:43.123836    1663 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso
	I0904 12:24:49.582977    1663 out.go:97] Starting "download-only-210000" primary control-plane node in "download-only-210000" cluster
	I0904 12:24:49.583007    1663 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 12:24:49.644887    1663 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0904 12:24:49.644896    1663 cache.go:56] Caching tarball of preloaded images
	I0904 12:24:49.645056    1663 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 12:24:49.650171    1663 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0904 12:24:49.650178    1663 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0904 12:24:49.738576    1663 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0904 12:24:56.446254    1663 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0904 12:24:56.446421    1663 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0904 12:24:57.142867    1663 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0904 12:24:57.143062    1663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/download-only-210000/config.json ...
	I0904 12:24:57.143092    1663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/download-only-210000/config.json: {Name:mk4ad6959e28f3b32d62b1914cf69975ab372a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:24:57.143319    1663 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 12:24:57.143499    1663 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0904 12:24:57.658995    1663 out.go:193] 
	W0904 12:24:57.665940    1663 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960] Decompressors:map[bz2:0x14000637d00 gz:0x14000637d08 tar:0x14000637c90 tar.bz2:0x14000637ca0 tar.gz:0x14000637cb0 tar.xz:0x14000637cc0 tar.zst:0x14000637cd0 tbz2:0x14000637ca0 tgz:0x14000637cb0 txz:0x14000637cc0 tzst:0x14000637cd0 xz:0x14000637d10 zip:0x14000637d20 zst:0x14000637d18] Getters:map[file:0x1400142e550 http:0x140004e62d0 https:0x140004e6320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0904 12:24:57.665968    1663 out_reason.go:110] 
	W0904 12:24:57.676901    1663 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 12:24:57.680826    1663 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-210000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.69s)
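The root cause is the kubectl cache step: dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 checksum file, most likely because upstream did not publish darwin/arm64 kubectl binaries for that release. A minimal standalone Go sketch (a hypothetical helper, not part of the test suite; the URL is copied from the log above, and it assumes outbound network access) to confirm the 404 independently of minikube:

	// check_kubectl_404.go: HEAD the checksum URL that minikube's getter
	// tried to fetch and print the status line. Expect "404 Not Found",
	// matching the INET_CACHE_KUBECTL error above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}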

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-414000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-414000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.84401825s)

-- stdout --
	* [offline-docker-414000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-414000" primary control-plane node in "offline-docker-414000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-414000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:10:28.830116    4190 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:10:28.830283    4190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:28.830287    4190 out.go:358] Setting ErrFile to fd 2...
	I0904 13:10:28.830289    4190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:28.830427    4190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:10:28.831767    4190 out.go:352] Setting JSON to false
	I0904 13:10:28.849504    4190 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4192,"bootTime":1725476436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:10:28.849577    4190 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:10:28.854457    4190 out.go:177] * [offline-docker-414000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:10:28.862329    4190 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:10:28.862330    4190 notify.go:220] Checking for updates...
	I0904 13:10:28.868293    4190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:10:28.871400    4190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:10:28.874322    4190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:10:28.877323    4190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:10:28.880239    4190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:10:28.883697    4190 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:10:28.883755    4190 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:10:28.887283    4190 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:10:28.894298    4190 start.go:297] selected driver: qemu2
	I0904 13:10:28.894309    4190 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:10:28.894316    4190 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:10:28.896365    4190 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:10:28.899288    4190 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:10:28.902322    4190 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:10:28.902363    4190 cni.go:84] Creating CNI manager for ""
	I0904 13:10:28.902375    4190 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:10:28.902381    4190 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:10:28.902424    4190 start.go:340] cluster config:
	{Name:offline-docker-414000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:10:28.906093    4190 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:10:28.914155    4190 out.go:177] * Starting "offline-docker-414000" primary control-plane node in "offline-docker-414000" cluster
	I0904 13:10:28.918285    4190 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:10:28.918317    4190 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:10:28.918325    4190 cache.go:56] Caching tarball of preloaded images
	I0904 13:10:28.918389    4190 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:10:28.918395    4190 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:10:28.918463    4190 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/offline-docker-414000/config.json ...
	I0904 13:10:28.918475    4190 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/offline-docker-414000/config.json: {Name:mkd850453fe080a4eed1d230b86082dbc07ceec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:10:28.918779    4190 start.go:360] acquireMachinesLock for offline-docker-414000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:28.918815    4190 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "offline-docker-414000"
	I0904 13:10:28.918827    4190 start.go:93] Provisioning new machine with config: &{Name:offline-docker-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:28.918870    4190 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:28.920325    4190 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:28.936155    4190 start.go:159] libmachine.API.Create for "offline-docker-414000" (driver="qemu2")
	I0904 13:10:28.936180    4190 client.go:168] LocalClient.Create starting
	I0904 13:10:28.936255    4190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:28.936286    4190 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:28.936296    4190 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:28.936336    4190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:28.936365    4190 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:28.936372    4190 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:28.936732    4190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:29.086831    4190 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:29.203932    4190 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:29.203950    4190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:29.204410    4190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2
	I0904 13:10:29.214256    4190 main.go:141] libmachine: STDOUT: 
	I0904 13:10:29.214276    4190 main.go:141] libmachine: STDERR: 
	I0904 13:10:29.214332    4190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2 +20000M
	I0904 13:10:29.223706    4190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:29.223733    4190 main.go:141] libmachine: STDERR: 
	I0904 13:10:29.223751    4190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2
	I0904 13:10:29.223759    4190 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:29.223772    4190 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:29.223803    4190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:61:c6:71:6b:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2
	I0904 13:10:29.225595    4190 main.go:141] libmachine: STDOUT: 
	I0904 13:10:29.225612    4190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:29.225629    4190 client.go:171] duration metric: took 289.449916ms to LocalClient.Create
	I0904 13:10:31.227767    4190 start.go:128] duration metric: took 2.308916334s to createHost
	I0904 13:10:31.227837    4190 start.go:83] releasing machines lock for "offline-docker-414000", held for 2.309056084s
	W0904 13:10:31.227855    4190 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:31.238861    4190 out.go:177] * Deleting "offline-docker-414000" in qemu2 ...
	W0904 13:10:31.253081    4190 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:31.253099    4190 start.go:729] Will try again in 5 seconds ...
	I0904 13:10:36.255348    4190 start.go:360] acquireMachinesLock for offline-docker-414000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:36.255882    4190 start.go:364] duration metric: took 414.667µs to acquireMachinesLock for "offline-docker-414000"
	I0904 13:10:36.256023    4190 start.go:93] Provisioning new machine with config: &{Name:offline-docker-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-414000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:36.256290    4190 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:36.264880    4190 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:36.314691    4190 start.go:159] libmachine.API.Create for "offline-docker-414000" (driver="qemu2")
	I0904 13:10:36.314750    4190 client.go:168] LocalClient.Create starting
	I0904 13:10:36.314858    4190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:36.314917    4190 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:36.314934    4190 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:36.314998    4190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:36.315042    4190 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:36.315059    4190 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:36.315595    4190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:36.476382    4190 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:36.580443    4190 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:36.580449    4190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:36.580663    4190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2
	I0904 13:10:36.589890    4190 main.go:141] libmachine: STDOUT: 
	I0904 13:10:36.589907    4190 main.go:141] libmachine: STDERR: 
	I0904 13:10:36.589957    4190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2 +20000M
	I0904 13:10:36.597795    4190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:36.597813    4190 main.go:141] libmachine: STDERR: 
	I0904 13:10:36.597820    4190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2
	I0904 13:10:36.597826    4190 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:36.597832    4190 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:36.597863    4190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:9b:1c:8a:fc:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/offline-docker-414000/disk.qcow2
	I0904 13:10:36.599497    4190 main.go:141] libmachine: STDOUT: 
	I0904 13:10:36.599513    4190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:36.599524    4190 client.go:171] duration metric: took 284.773167ms to LocalClient.Create
	I0904 13:10:38.601694    4190 start.go:128] duration metric: took 2.345394375s to createHost
	I0904 13:10:38.601798    4190 start.go:83] releasing machines lock for "offline-docker-414000", held for 2.345922333s
	W0904 13:10:38.602187    4190 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-414000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:38.615781    4190 out.go:201] 
	W0904 13:10:38.618856    4190 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:10:38.618899    4190 out.go:270] * 
	* 
	W0904 13:10:38.621885    4190 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:10:38.630730    4190 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-414000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-09-04 13:10:38.645708 -0700 PDT m=+2755.703776376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-414000 -n offline-docker-414000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-414000 -n offline-docker-414000: exit status 7 (68.439708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-414000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-414000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-414000
--- FAIL: TestOffline (9.99s)
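Every qemu2 start in this run fails the same way: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon and exits with "Connection refused". A minimal standalone Go sketch (a hypothetical probe; the socket path is taken from the SocketVMnetPath field in the config dump above) that checks whether anything is listening on that unix socket:

	// probe_socket_vmnet.go: dial the socket_vmnet control socket the way
	// any unix-domain client would. On this host the dial should fail with
	// "connection refused", matching the StartHost errors above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, checking that the socket_vmnet daemon is running (and that /var/run/socket_vmnet is accessible) is the first thing to try before rerunning the suite.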

TestAddons/parallel/Registry (71.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.329375ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-cwbqg" [f7b9bfe5-0693-429f-b374-e1fdc2260b34] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010975084s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2qzcv" [0bbedfdb-5af5-493c-82d3-98bada5a51cc] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009667458s
addons_test.go:342: (dbg) Run:  kubectl --context addons-970000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-970000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-970000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.063057875s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-970000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
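The failed assertion wraps an in-cluster probe: wget --spider against the registry Service's cluster DNS name, expecting an HTTP/1.1 200, but the busybox pod timed out after 1m0s. A minimal Go sketch of the equivalent probe (hypothetical; it only works from inside the cluster, e.g. in a debug pod, since the .svc.cluster.local name does not resolve on the host):

	// probe_registry.go: GET the registry Service by its cluster DNS name
	// and print the protocol and status code; the test asserts the
	// equivalent of "HTTP/1.1 200".
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Proto, resp.StatusCode)
	}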
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 ip
2024/09/04 12:38:17 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-970000 -n addons-970000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-210000 | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT |                     |
	|         | -p download-only-210000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT | 04 Sep 24 12:24 PDT |
	| delete  | -p download-only-210000              | download-only-210000 | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT | 04 Sep 24 12:24 PDT |
	| start   | -o=json --download-only              | download-only-744000 | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT |                     |
	|         | -p download-only-744000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT | 04 Sep 24 12:25 PDT |
	| delete  | -p download-only-744000              | download-only-744000 | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT | 04 Sep 24 12:25 PDT |
	| delete  | -p download-only-210000              | download-only-210000 | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT | 04 Sep 24 12:25 PDT |
	| delete  | -p download-only-744000              | download-only-744000 | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT | 04 Sep 24 12:25 PDT |
	| start   | --download-only -p                   | binary-mirror-359000 | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT |                     |
	|         | binary-mirror-359000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-359000              | binary-mirror-359000 | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT | 04 Sep 24 12:25 PDT |
	| addons  | enable dashboard -p                  | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT |                     |
	|         | addons-970000                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT |                     |
	|         | addons-970000                        |                      |         |         |                     |                     |
	| start   | -p addons-970000 --wait=true         | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:25 PDT | 04 Sep 24 12:28 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-970000 addons disable         | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:28 PDT | 04 Sep 24 12:29 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-970000 addons                 | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:37 PDT | 04 Sep 24 12:38 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-970000 addons                 | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:38 PDT | 04 Sep 24 12:38 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-970000 addons                 | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:38 PDT | 04 Sep 24 12:38 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:38 PDT |                     |
	|         | addons-970000                        |                      |         |         |                     |                     |
	| ip      | addons-970000 ip                     | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:38 PDT | 04 Sep 24 12:38 PDT |
	| addons  | addons-970000 addons disable         | addons-970000        | jenkins | v1.34.0 | 04 Sep 24 12:38 PDT | 04 Sep 24 12:38 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 12:25:07
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 12:25:07.088561    1740 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:25:07.088694    1740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:25:07.088697    1740 out.go:358] Setting ErrFile to fd 2...
	I0904 12:25:07.088700    1740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:25:07.088835    1740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:25:07.089983    1740 out.go:352] Setting JSON to false
	I0904 12:25:07.105996    1740 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1471,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 12:25:07.106072    1740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 12:25:07.109618    1740 out.go:177] * [addons-970000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 12:25:07.116621    1740 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 12:25:07.116687    1740 notify.go:220] Checking for updates...
	I0904 12:25:07.123517    1740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:25:07.126596    1740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 12:25:07.129552    1740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 12:25:07.132620    1740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 12:25:07.135570    1740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 12:25:07.137020    1740 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 12:25:07.141558    1740 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 12:25:07.148428    1740 start.go:297] selected driver: qemu2
	I0904 12:25:07.148434    1740 start.go:901] validating driver "qemu2" against <nil>
	I0904 12:25:07.148439    1740 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 12:25:07.150560    1740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 12:25:07.153537    1740 out.go:177] * Automatically selected the socket_vmnet network
	I0904 12:25:07.156710    1740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 12:25:07.156734    1740 cni.go:84] Creating CNI manager for ""
	I0904 12:25:07.156742    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 12:25:07.156745    1740 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 12:25:07.156785    1740 start.go:340] cluster config:
	{Name:addons-970000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:25:07.160387    1740 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 12:25:07.167540    1740 out.go:177] * Starting "addons-970000" primary control-plane node in "addons-970000" cluster
	I0904 12:25:07.171553    1740 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 12:25:07.171573    1740 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 12:25:07.171581    1740 cache.go:56] Caching tarball of preloaded images
	I0904 12:25:07.171655    1740 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 12:25:07.171661    1740 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 12:25:07.171861    1740 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/config.json ...
	I0904 12:25:07.171873    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/config.json: {Name:mk9711edb1ea9beb0bb5ede3888400fd268bee07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:07.172263    1740 start.go:360] acquireMachinesLock for addons-970000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 12:25:07.172326    1740 start.go:364] duration metric: took 57.166µs to acquireMachinesLock for "addons-970000"
	I0904 12:25:07.172338    1740 start.go:93] Provisioning new machine with config: &{Name:addons-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 12:25:07.172371    1740 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 12:25:07.181644    1740 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0904 12:25:07.420030    1740 start.go:159] libmachine.API.Create for "addons-970000" (driver="qemu2")
	I0904 12:25:07.420070    1740 client.go:168] LocalClient.Create starting
	I0904 12:25:07.420288    1740 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 12:25:07.501742    1740 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 12:25:07.566673    1740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 12:25:08.274797    1740 main.go:141] libmachine: Creating SSH key...
	I0904 12:25:08.436641    1740 main.go:141] libmachine: Creating Disk image...
	I0904 12:25:08.436647    1740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 12:25:08.436968    1740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/disk.qcow2
	I0904 12:25:08.453741    1740 main.go:141] libmachine: STDOUT: 
	I0904 12:25:08.453766    1740 main.go:141] libmachine: STDERR: 
	I0904 12:25:08.453826    1740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/disk.qcow2 +20000M
	I0904 12:25:08.462064    1740 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 12:25:08.462082    1740 main.go:141] libmachine: STDERR: 
	I0904 12:25:08.462096    1740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/disk.qcow2
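	The two qemu-img invocations above are the whole disk build: the raw seed image is converted to qcow2, then the qcow2 is grown by the requested amount ("+20000M" is a relative resize, not an absolute size). A minimal sketch of the same sequence in Go, assuming qemu-img is on PATH; the paths and helper name are illustrative, not minikube's actual driver code:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os/exec"
	    )

	    // createDisk mirrors the logged qemu-img calls: convert the raw seed
	    // image to qcow2, then grow the qcow2 by sizeMB megabytes.
	    func createDisk(rawPath, qcow2Path string, sizeMB int) error {
	    	if out, err := exec.Command("qemu-img", "convert",
	    		"-f", "raw", "-O", "qcow2", rawPath, qcow2Path).CombinedOutput(); err != nil {
	    		return fmt.Errorf("convert: %v: %s", err, out)
	    	}
	    	if out, err := exec.Command("qemu-img", "resize",
	    		qcow2Path, fmt.Sprintf("+%dM", sizeMB)).CombinedOutput(); err != nil {
	    		return fmt.Errorf("resize: %v: %s", err, out)
	    	}
	    	return nil
	    }

	    func main() {
	    	// Placeholder paths; the log uses the profile's machines directory.
	    	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
	    		log.Fatal(err)
	    	}
	    }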
	I0904 12:25:08.462101    1740 main.go:141] libmachine: Starting QEMU VM...
	I0904 12:25:08.462144    1740 qemu.go:418] Using hvf for hardware acceleration
	I0904 12:25:08.462176    1740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a5:69:d1:b6:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/disk.qcow2
	I0904 12:25:08.520679    1740 main.go:141] libmachine: STDOUT: 
	I0904 12:25:08.520705    1740 main.go:141] libmachine: STDERR: 
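	Note the "-netdev socket,id=net0,fd=3" argument in the command above: qemu is not launched directly but wrapped by socket_vmnet_client, which hands qemu an already-connected socket to the socket_vmnet daemon as file descriptor 3. In Go, an inherited descriptor like that is what exec.Cmd.ExtraFiles provides, where ExtraFiles[0] becomes fd 3 in the child. A rough sketch under those assumptions (the exact socket type socket_vmnet uses is not shown in the log):

	    package main

	    import (
	    	"log"
	    	"net"
	    	"os"
	    	"os/exec"
	    )

	    func main() {
	    	// Assumed: the daemon listens on a unix datagram socket at this path.
	    	conn, err := net.Dial("unixgram", "/var/run/socket_vmnet")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	f, err := conn.(*net.UnixConn).File() // duplicate the fd so the child can inherit it
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	cmd := exec.Command("qemu-system-aarch64",
	    		"-M", "virt", "-cpu", "host",
	    		"-netdev", "socket,id=net0,fd=3", // fd 3 == ExtraFiles[0]
	    		"-device", "virtio-net-pci,netdev=net0")
	    	cmd.ExtraFiles = []*os.File{f} // the child sees this as fd 3
	    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	    	if err := cmd.Run(); err != nil {
	    		log.Fatal(err)
	    	}
	    }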
	I0904 12:25:08.520709    1740 main.go:141] libmachine: Attempt 0
	I0904 12:25:08.520722    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:08.520767    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0904 12:25:08.520789    1740 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66da056d}
	I0904 12:25:10.522907    1740 main.go:141] libmachine: Attempt 1
	I0904 12:25:10.522994    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:10.523433    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0904 12:25:10.523482    1740 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66da056d}
	I0904 12:25:12.525646    1740 main.go:141] libmachine: Attempt 2
	I0904 12:25:12.525724    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:12.526137    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0904 12:25:12.526187    1740 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66da056d}
	I0904 12:25:14.528318    1740 main.go:141] libmachine: Attempt 3
	I0904 12:25:14.528346    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:14.528489    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0904 12:25:14.528506    1740 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66da056d}
	I0904 12:25:16.530510    1740 main.go:141] libmachine: Attempt 4
	I0904 12:25:16.530518    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:16.530551    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0904 12:25:16.530558    1740 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66da056d}
	I0904 12:25:18.532548    1740 main.go:141] libmachine: Attempt 5
	I0904 12:25:18.532560    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:18.532588    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0904 12:25:18.532593    1740 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66da056d}
	I0904 12:25:20.534169    1740 main.go:141] libmachine: Attempt 6
	I0904 12:25:20.534189    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:20.534274    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0904 12:25:20.534284    1740 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66da056d}
	I0904 12:25:22.536311    1740 main.go:141] libmachine: Attempt 7
	I0904 12:25:22.536397    1740 main.go:141] libmachine: Searching for ea:a5:69:d1:b6:d6 in /var/db/dhcpd_leases ...
	I0904 12:25:22.536533    1740 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0904 12:25:22.536548    1740 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ea:a5:69:d1:b6:d6 ID:1,ea:a5:69:d1:b6:d6 Lease:0x66da05a0}
	I0904 12:25:22.536552    1740 main.go:141] libmachine: Found match: ea:a5:69:d1:b6:d6
	I0904 12:25:22.536573    1740 main.go:141] libmachine: IP: 192.168.105.2
	I0904 12:25:22.536579    1740 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
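	Attempts 0 through 7 above are the qemu2 driver polling the host's /var/db/dhcpd_leases every two seconds until an entry with the VM's generated MAC address (ea:a5:69:d1:b6:d6) appears; on macOS there is no hypervisor API for this, so the lease file is the source of truth for the guest IP. A simplified, line-oriented version of that lookup (the real file is a block format; this sketch only assumes each lease exposes ip_address= before hw_address= within a block):

	    package main

	    import (
	    	"bufio"
	    	"fmt"
	    	"os"
	    	"strings"
	    )

	    // ipForMAC scans a dhcpd_leases-style file for the lease block whose
	    // hw_address mentions mac, returning the ip_address seen in that block.
	    func ipForMAC(path, mac string) (string, error) {
	    	f, err := os.Open(path)
	    	if err != nil {
	    		return "", err
	    	}
	    	defer f.Close()

	    	var ip string
	    	sc := bufio.NewScanner(f)
	    	for sc.Scan() {
	    		line := strings.TrimSpace(sc.Text())
	    		if strings.HasPrefix(line, "ip_address=") {
	    			ip = strings.TrimPrefix(line, "ip_address=")
	    		}
	    		// hw_address lines look like "hw_address=1,ea:a5:69:d1:b6:d6".
	    		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
	    			return ip, nil
	    		}
	    	}
	    	if err := sc.Err(); err != nil {
	    		return "", err
	    	}
	    	return "", fmt.Errorf("no lease for %s", mac)
	    }

	    func main() {
	    	ip, err := ipForMAC("/var/db/dhcpd_leases", "ea:a5:69:d1:b6:d6")
	    	if err != nil {
	    		fmt.Println("not found yet, retry:", err) // the log retries on a 2s cadence
	    		return
	    	}
	    	fmt.Println("guest IP:", ip)
	    }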
	I0904 12:25:24.556852    1740 machine.go:93] provisionDockerMachine start ...
	I0904 12:25:24.558351    1740 main.go:141] libmachine: Using SSH client type: native
	I0904 12:25:24.558799    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049c45a0] 0x1049c6e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0904 12:25:24.558813    1740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 12:25:24.632982    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 12:25:24.633029    1740 buildroot.go:166] provisioning hostname "addons-970000"
	I0904 12:25:24.633144    1740 main.go:141] libmachine: Using SSH client type: native
	I0904 12:25:24.633399    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049c45a0] 0x1049c6e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0904 12:25:24.633409    1740 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-970000 && echo "addons-970000" | sudo tee /etc/hostname
	I0904 12:25:24.703051    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-970000
	
	I0904 12:25:24.703139    1740 main.go:141] libmachine: Using SSH client type: native
	I0904 12:25:24.703308    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049c45a0] 0x1049c6e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0904 12:25:24.703319    1740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-970000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-970000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-970000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 12:25:24.758604    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 12:25:24.758616    1740 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19575-1140/.minikube CaCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19575-1140/.minikube}
	I0904 12:25:24.758623    1740 buildroot.go:174] setting up certificates
	I0904 12:25:24.758628    1740 provision.go:84] configureAuth start
	I0904 12:25:24.758632    1740 provision.go:143] copyHostCerts
	I0904 12:25:24.758713    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem (1123 bytes)
	I0904 12:25:24.758961    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem (1675 bytes)
	I0904 12:25:24.759086    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem (1078 bytes)
	I0904 12:25:24.759178    1740 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem org=jenkins.addons-970000 san=[127.0.0.1 192.168.105.2 addons-970000 localhost minikube]
	I0904 12:25:24.838375    1740 provision.go:177] copyRemoteCerts
	I0904 12:25:24.838436    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 12:25:24.838453    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:24.866575    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 12:25:24.875498    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 12:25:24.883622    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 12:25:24.891562    1740 provision.go:87] duration metric: took 132.924125ms to configureAuth
	I0904 12:25:24.891575    1740 buildroot.go:189] setting minikube options for container-runtime
	I0904 12:25:24.891710    1740 config.go:182] Loaded profile config "addons-970000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:25:24.891765    1740 main.go:141] libmachine: Using SSH client type: native
	I0904 12:25:24.891858    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049c45a0] 0x1049c6e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0904 12:25:24.891862    1740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 12:25:24.942772    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 12:25:24.942779    1740 buildroot.go:70] root file system type: tmpfs
	I0904 12:25:24.942830    1740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 12:25:24.942870    1740 main.go:141] libmachine: Using SSH client type: native
	I0904 12:25:24.942975    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049c45a0] 0x1049c6e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0904 12:25:24.943010    1740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 12:25:24.997218    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 12:25:24.997265    1740 main.go:141] libmachine: Using SSH client type: native
	I0904 12:25:24.997385    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049c45a0] 0x1049c6e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0904 12:25:24.997393    1740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 12:25:26.367952    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0904 12:25:26.367964    1740 machine.go:96] duration metric: took 1.811128459s to provisionDockerMachine
	I0904 12:25:26.367971    1740 client.go:171] duration metric: took 18.948370042s to LocalClient.Create
	I0904 12:25:26.367984    1740 start.go:167] duration metric: took 18.948435041s to libmachine.API.Create "addons-970000"
	I0904 12:25:26.367990    1740 start.go:293] postStartSetup for "addons-970000" (driver="qemu2")
	I0904 12:25:26.367996    1740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 12:25:26.368068    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 12:25:26.368079    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:26.398251    1740 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 12:25:26.400752    1740 info.go:137] Remote host: Buildroot 2023.02.9
	I0904 12:25:26.400763    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/addons for local assets ...
	I0904 12:25:26.400861    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/files for local assets ...
	I0904 12:25:26.400893    1740 start.go:296] duration metric: took 32.90075ms for postStartSetup
	I0904 12:25:26.401286    1740 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/config.json ...
	I0904 12:25:26.401487    1740 start.go:128] duration metric: took 19.229589375s to createHost
	I0904 12:25:26.401509    1740 main.go:141] libmachine: Using SSH client type: native
	I0904 12:25:26.401600    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049c45a0] 0x1049c6e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0904 12:25:26.401604    1740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 12:25:26.450511    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725477926.000556669
	
	I0904 12:25:26.450520    1740 fix.go:216] guest clock: 1725477926.000556669
	I0904 12:25:26.450524    1740 fix.go:229] Guest: 2024-09-04 12:25:26.000556669 -0700 PDT Remote: 2024-09-04 12:25:26.40149 -0700 PDT m=+19.331602793 (delta=-400.933331ms)
	I0904 12:25:26.450536    1740 fix.go:200] guest clock delta is within tolerance: -400.933331ms
	I0904 12:25:26.450538    1740 start.go:83] releasing machines lock for "addons-970000", held for 19.278690125s
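	The guest-clock check above runs `date +%s.%N` inside the VM and compares the parsed timestamp with the host clock; here the guest was about 0.4s behind, within tolerance, so no resync was needed. A self-contained sketch of that comparison using the exact values from the log (the tolerance constant is illustrative; minikube's actual threshold is not shown here):

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    // guestDelta parses `date +%s.%N` output ("1725477926.000556669")
	    // and returns guestTime minus hostTime.
	    func guestDelta(out string, host time.Time) (time.Duration, error) {
	    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return 0, err
	    	}
	    	var nsec int64
	    	if len(parts) == 2 {
	    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	    			return 0, err
	    		}
	    	}
	    	return time.Unix(sec, nsec).Sub(host), nil
	    }

	    func main() {
	    	pdt := time.FixedZone("PDT", -7*60*60)
	    	host := time.Date(2024, 9, 4, 12, 25, 26, 401490000, pdt) // host clock from the log
	    	d, err := guestDelta("1725477926.000556669", host)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Prints delta=-400.933331ms, matching the logged value.
	    	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < 2*time.Second)
	    }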
	I0904 12:25:26.450842    1740 ssh_runner.go:195] Run: cat /version.json
	I0904 12:25:26.450845    1740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 12:25:26.450849    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:26.450870    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:26.479105    1740 ssh_runner.go:195] Run: systemctl --version
	I0904 12:25:26.528649    1740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 12:25:26.530707    1740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 12:25:26.530739    1740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 12:25:26.536985    1740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 12:25:26.536994    1740 start.go:495] detecting cgroup driver to use...
	I0904 12:25:26.537116    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 12:25:26.543852    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0904 12:25:26.547223    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 12:25:26.550598    1740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 12:25:26.550627    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 12:25:26.554256    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 12:25:26.558054    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 12:25:26.561842    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 12:25:26.565695    1740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 12:25:26.569615    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 12:25:26.573580    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 12:25:26.577412    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0904 12:25:26.581184    1740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 12:25:26.584539    1740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 12:25:26.587718    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 12:25:26.671501    1740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 12:25:26.678216    1740 start.go:495] detecting cgroup driver to use...
	I0904 12:25:26.678289    1740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 12:25:26.686004    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 12:25:26.691756    1740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 12:25:26.701197    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 12:25:26.706827    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 12:25:26.712025    1740 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 12:25:26.757534    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 12:25:26.763971    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 12:25:26.770447    1740 ssh_runner.go:195] Run: which cri-dockerd
	I0904 12:25:26.771833    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 12:25:26.774756    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0904 12:25:26.780758    1740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 12:25:26.851317    1740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 12:25:26.919213    1740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 12:25:26.919275    1740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 12:25:26.925343    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 12:25:26.995163    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 12:25:29.186210    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.191085s)
	I0904 12:25:29.186285    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 12:25:29.191897    1740 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0904 12:25:29.198543    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 12:25:29.204129    1740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 12:25:29.270304    1740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 12:25:29.334654    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 12:25:29.420008    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 12:25:29.426953    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 12:25:29.432228    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 12:25:29.499556    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 12:25:29.526439    1740 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 12:25:29.526538    1740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 12:25:29.529458    1740 start.go:563] Will wait 60s for crictl version
	I0904 12:25:29.529503    1740 ssh_runner.go:195] Run: which crictl
	I0904 12:25:29.531073    1740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 12:25:29.571469    1740 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0904 12:25:29.571539    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 12:25:29.588307    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 12:25:29.604989    1740 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0904 12:25:29.605071    1740 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0904 12:25:29.606724    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 12:25:29.610863    1740 kubeadm.go:883] updating cluster {Name:addons-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 12:25:29.610908    1740 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 12:25:29.610949    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 12:25:29.616480    1740 docker.go:685] Got preloaded images: 
	I0904 12:25:29.616491    1740 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0904 12:25:29.616538    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 12:25:29.620427    1740 ssh_runner.go:195] Run: which lz4
	I0904 12:25:29.621861    1740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 12:25:29.623402    1740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 12:25:29.623411    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322549298 bytes)
	I0904 12:25:30.865612    1740 docker.go:649] duration metric: took 1.243809917s to copy over tarball
	I0904 12:25:30.865675    1740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 12:25:31.839828    1740 ssh_runner.go:146] rm: /preloaded.tar.lz4
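	This is the preload fast path: rather than pulling each image, minikube stats /preloaded.tar.lz4 in the guest (the stat above exits 1, so the cached tarball is scp'd over), unpacks it straight into /var with lz4, and deletes the tarball. A compressed sketch of that check-then-copy decision, with the SSH plumbing abstracted behind a hypothetical runner interface (not minikube's actual ssh_runner API):

	    package main

	    import (
	    	"fmt"
	    	"log"
	    )

	    // runner abstracts "run this command in the guest"; Run returns an
	    // error on non-zero exit status, as the logged stat did.
	    type runner interface {
	    	Run(cmd string) error
	    	Copy(localPath, remotePath string) error
	    }

	    // ensurePreload copies the preload tarball only when the guest does
	    // not already have it, then extracts it into /var and removes it.
	    func ensurePreload(r runner, localTarball string) error {
	    	const remote = "/preloaded.tar.lz4"
	    	if err := r.Run(`stat -c "%s %y" ` + remote); err != nil {
	    		// stat failed ("No such file or directory" in the log) -> copy it over.
	    		if err := r.Copy(localTarball, remote); err != nil {
	    			return fmt.Errorf("copy preload: %w", err)
	    		}
	    	}
	    	if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
	    		return fmt.Errorf("extract preload: %w", err)
	    	}
	    	return r.Run("sudo rm -f " + remote)
	    }

	    // fakeRunner just prints what would run, so the sketch executes as-is.
	    type fakeRunner struct{}

	    func (fakeRunner) Run(cmd string) error            { fmt.Println("run:", cmd); return nil }
	    func (fakeRunner) Copy(local, remote string) error { fmt.Println("scp:", local, "->", remote); return nil }

	    func main() {
	    	if err := ensurePreload(fakeRunner{}, "preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4"); err != nil {
	    		log.Fatal(err)
	    	}
	    }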
	I0904 12:25:31.854301    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 12:25:31.858083    1740 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0904 12:25:31.863944    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 12:25:31.942118    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 12:25:34.563599    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.621529666s)
	I0904 12:25:34.563702    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 12:25:34.569847    1740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 12:25:34.569857    1740 cache_images.go:84] Images are preloaded, skipping loading
	I0904 12:25:34.569862    1740 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.0 docker true true} ...
	I0904 12:25:34.569976    1740 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-970000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 12:25:34.570035    1740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0904 12:25:34.591129    1740 cni.go:84] Creating CNI manager for ""
	I0904 12:25:34.591142    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 12:25:34.591158    1740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 12:25:34.591169    1740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-970000 NodeName:addons-970000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 12:25:34.591234    1740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-970000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 12:25:34.591300    1740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 12:25:34.594805    1740 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 12:25:34.594845    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 12:25:34.598211    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0904 12:25:34.604334    1740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 12:25:34.610013    1740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0904 12:25:34.615776    1740 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0904 12:25:34.617037    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 12:25:34.621406    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 12:25:34.712569    1740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 12:25:34.721939    1740 certs.go:68] Setting up /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000 for IP: 192.168.105.2
	I0904 12:25:34.721951    1740 certs.go:194] generating shared ca certs ...
	I0904 12:25:34.721960    1740 certs.go:226] acquiring lock for ca certs: {Name:mkd62cc1bdffb2500ac7e662aba46cadabbc6839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:34.722156    1740 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key
	I0904 12:25:34.808034    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt ...
	I0904 12:25:34.808043    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt: {Name:mk53d45244ece136adffef8842a275ab0c42d833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:34.808317    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key ...
	I0904 12:25:34.808321    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key: {Name:mke7edac2d3202a64d0181979e912cee22824272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:34.808449    1740 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key
	I0904 12:25:34.867073    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.crt ...
	I0904 12:25:34.867076    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.crt: {Name:mka7751235d9c094d2aa594fda90ea48d2140ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:34.867227    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key ...
	I0904 12:25:34.867230    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key: {Name:mk5fb8d2088bac06c07782d7e5fd3bcaa81a6083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:34.867348    1740 certs.go:256] generating profile certs ...
	I0904 12:25:34.867386    1740 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.key
	I0904 12:25:34.867394    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt with IP's: []
	I0904 12:25:34.987839    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt ...
	I0904 12:25:34.987842    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: {Name:mkfb6f07008de0843ad96a059a8e97bab8ec6e86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:34.987980    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.key ...
	I0904 12:25:34.987983    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.key: {Name:mkc096b6c48a4d999341d0d08387a1163a81d7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:34.988097    1740 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.key.26dbc3fd
	I0904 12:25:34.988106    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.crt.26dbc3fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0904 12:25:35.271665    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.crt.26dbc3fd ...
	I0904 12:25:35.271678    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.crt.26dbc3fd: {Name:mk0bec9045eb77d397cd64c74e66bea6323301bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:35.271985    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.key.26dbc3fd ...
	I0904 12:25:35.271990    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.key.26dbc3fd: {Name:mk9ac468829943efea67b28f07c4f59b086e5ce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:35.272109    1740 certs.go:381] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.crt.26dbc3fd -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.crt
	I0904 12:25:35.272305    1740 certs.go:385] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.key.26dbc3fd -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.key
	I0904 12:25:35.272405    1740 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.key
	I0904 12:25:35.272420    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.crt with IP's: []
	I0904 12:25:35.325133    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.crt ...
	I0904 12:25:35.325137    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.crt: {Name:mkc539600b206c43e12b9d5131ffa71f98cc082b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:35.325282    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.key ...
	I0904 12:25:35.325285    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.key: {Name:mkfd54a02eee234ae61a41e66eacad44d6ae9877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:35.325569    1740 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 12:25:35.325591    1740 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem (1078 bytes)
	I0904 12:25:35.325610    1740 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem (1123 bytes)
	I0904 12:25:35.325634    1740 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem (1675 bytes)
	I0904 12:25:35.326084    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 12:25:35.334948    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 12:25:35.342866    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 12:25:35.350943    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 12:25:35.364908    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 12:25:35.373907    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 12:25:35.385902    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 12:25:35.394076    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 12:25:35.402325    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 12:25:35.410407    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 12:25:35.417399    1740 ssh_runner.go:195] Run: openssl version
	I0904 12:25:35.419968    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 12:25:35.423531    1740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 12:25:35.425006    1740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0904 12:25:35.425029    1740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 12:25:35.426934    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
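The b5213941.0 symlink name above comes from OpenSSL's hashed-directory lookup scheme: trust stores are scanned for files named <subject-hash>.0, where the hash is what `openssl x509 -hash` (an alias for -subject_hash) prints. Reproducing the two steps by hand on the node:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
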
	I0904 12:25:35.430564    1740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 12:25:35.431939    1740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 12:25:35.431982    1740 kubeadm.go:392] StartCluster: {Name:addons-970000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:25:35.432044    1740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 12:25:35.439534    1740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 12:25:35.443051    1740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 12:25:35.446325    1740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 12:25:35.449819    1740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 12:25:35.449826    1740 kubeadm.go:157] found existing configuration files:
	
	I0904 12:25:35.449847    1740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 12:25:35.453182    1740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 12:25:35.453205    1740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 12:25:35.456634    1740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 12:25:35.459978    1740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 12:25:35.460005    1740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 12:25:35.463121    1740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 12:25:35.466296    1740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 12:25:35.466319    1740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 12:25:35.469729    1740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 12:25:35.473266    1740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 12:25:35.473285    1740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 12:25:35.476672    1740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 12:25:35.497732    1740 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0904 12:25:35.497775    1740 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 12:25:35.534699    1740 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 12:25:35.534788    1740 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 12:25:35.534834    1740 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 12:25:35.538718    1740 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 12:25:35.551897    1740 out.go:235]   - Generating certificates and keys ...
	I0904 12:25:35.551931    1740 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 12:25:35.551968    1740 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 12:25:35.587721    1740 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 12:25:35.636446    1740 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 12:25:35.772979    1740 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 12:25:35.843151    1740 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 12:25:35.952425    1740 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 12:25:35.952489    1740 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-970000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0904 12:25:36.017585    1740 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 12:25:36.017650    1740 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-970000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0904 12:25:36.112139    1740 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 12:25:36.286312    1740 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 12:25:36.431185    1740 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 12:25:36.431221    1740 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 12:25:36.619526    1740 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 12:25:36.717835    1740 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 12:25:36.827424    1740 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 12:25:36.882719    1740 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 12:25:36.925999    1740 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 12:25:36.926313    1740 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 12:25:36.928662    1740 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 12:25:36.933008    1740 out.go:235]   - Booting up control plane ...
	I0904 12:25:36.933077    1740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 12:25:36.933143    1740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 12:25:36.933187    1740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 12:25:36.938640    1740 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 12:25:36.941236    1740 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 12:25:36.941272    1740 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 12:25:37.018171    1740 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 12:25:37.018274    1740 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 12:25:37.524784    1740 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.995042ms
	I0904 12:25:37.525056    1740 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0904 12:25:40.525497    1740 kubeadm.go:310] [api-check] The API server is healthy after 3.001479876s
	I0904 12:25:40.531547    1740 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 12:25:40.536400    1740 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 12:25:40.544720    1740 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 12:25:40.544818    1740 kubeadm.go:310] [mark-control-plane] Marking the node addons-970000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 12:25:40.547597    1740 kubeadm.go:310] [bootstrap-token] Using token: ci1a9i.urqh8y20hj92jxnq
	I0904 12:25:40.561372    1740 out.go:235]   - Configuring RBAC rules ...
	I0904 12:25:40.561425    1740 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 12:25:40.561468    1740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 12:25:40.562890    1740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 12:25:40.563868    1740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 12:25:40.564992    1740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 12:25:40.565886    1740 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 12:25:40.931759    1740 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 12:25:41.353751    1740 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 12:25:41.928879    1740 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 12:25:41.929619    1740 kubeadm.go:310] 
	I0904 12:25:41.929663    1740 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 12:25:41.929669    1740 kubeadm.go:310] 
	I0904 12:25:41.929733    1740 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 12:25:41.929742    1740 kubeadm.go:310] 
	I0904 12:25:41.929762    1740 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 12:25:41.929830    1740 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 12:25:41.929866    1740 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 12:25:41.929870    1740 kubeadm.go:310] 
	I0904 12:25:41.929905    1740 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 12:25:41.929910    1740 kubeadm.go:310] 
	I0904 12:25:41.929941    1740 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 12:25:41.929944    1740 kubeadm.go:310] 
	I0904 12:25:41.929979    1740 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 12:25:41.930042    1740 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 12:25:41.930103    1740 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 12:25:41.930109    1740 kubeadm.go:310] 
	I0904 12:25:41.930169    1740 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 12:25:41.930226    1740 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 12:25:41.930232    1740 kubeadm.go:310] 
	I0904 12:25:41.930352    1740 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ci1a9i.urqh8y20hj92jxnq \
	I0904 12:25:41.930441    1740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 \
	I0904 12:25:41.930528    1740 kubeadm.go:310] 	--control-plane 
	I0904 12:25:41.930538    1740 kubeadm.go:310] 
	I0904 12:25:41.930597    1740 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 12:25:41.930606    1740 kubeadm.go:310] 
	I0904 12:25:41.930707    1740 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ci1a9i.urqh8y20hj92jxnq \
	I0904 12:25:41.930776    1740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 
	I0904 12:25:41.931019    1740 kubeadm.go:310] W0904 19:25:35.046485    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 12:25:41.931241    1740 kubeadm.go:310] W0904 19:25:35.046959    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 12:25:41.931343    1740 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
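For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. Assuming an RSA CA key (minikube's default), it can be recomputed on the node with the command from the kubeadm docs, pointed at this cluster's certificateDir; it should print the 3feb851b… value shown above:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
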
	I0904 12:25:41.931355    1740 cni.go:84] Creating CNI manager for ""
	I0904 12:25:41.931367    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 12:25:41.936095    1740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 12:25:41.939827    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 12:25:41.944247    1740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
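The 496-byte file written above is minikube's bridge CNI chain (presumably serving the same 10.244.0.0/16 pod range configured as clusterCIDR for kube-proxy earlier). Its contents aren't echoed in the log, but they can be inspected on the node:

	minikube -p addons-970000 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
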
	I0904 12:25:41.950435    1740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 12:25:41.950502    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:41.950513    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-970000 minikube.k8s.io/updated_at=2024_09_04T12_25_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=addons-970000 minikube.k8s.io/primary=true
	I0904 12:25:41.954246    1740 ops.go:34] apiserver oom_adj: -16
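An oom_adj of -16 (on the legacy -17..15 scale) gives kube-apiserver a strong negative OOM-killer bias, so under memory pressure workloads are reaped before the control plane. The same check from the log, runnable by hand:

	minikube -p addons-970000 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'
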
	I0904 12:25:42.017212    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:42.519356    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:43.019375    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:43.517411    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:44.019372    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:44.519308    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:45.019296    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:45.519302    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:46.019246    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:46.519383    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 12:25:46.582157    1740 kubeadm.go:1113] duration metric: took 4.6318055s to wait for elevateKubeSystemPrivileges
	I0904 12:25:46.582173    1740 kubeadm.go:394] duration metric: took 11.150471792s to StartCluster
	I0904 12:25:46.582186    1740 settings.go:142] acquiring lock: {Name:mk9e5d70c30d2e6b96e7a9eeb7ab14f5f9a1127e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:46.582367    1740 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:25:46.582602    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:25:46.582870    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 12:25:46.582913    1740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 12:25:46.582998    1740 config.go:182] Loaded profile config "addons-970000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:25:46.583023    1740 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 12:25:46.583076    1740 addons.go:69] Setting yakd=true in profile "addons-970000"
	I0904 12:25:46.583089    1740 addons.go:234] Setting addon yakd=true in "addons-970000"
	I0904 12:25:46.583090    1740 addons.go:69] Setting inspektor-gadget=true in profile "addons-970000"
	I0904 12:25:46.583094    1740 addons.go:69] Setting default-storageclass=true in profile "addons-970000"
	I0904 12:25:46.583100    1740 addons.go:234] Setting addon inspektor-gadget=true in "addons-970000"
	I0904 12:25:46.583102    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583112    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583136    1740 addons.go:69] Setting registry=true in profile "addons-970000"
	I0904 12:25:46.583146    1740 addons.go:234] Setting addon registry=true in "addons-970000"
	I0904 12:25:46.583110    1740 addons.go:69] Setting ingress=true in profile "addons-970000"
	I0904 12:25:46.583161    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583169    1740 addons.go:234] Setting addon ingress=true in "addons-970000"
	I0904 12:25:46.583184    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583136    1740 addons.go:69] Setting volumesnapshots=true in profile "addons-970000"
	I0904 12:25:46.583208    1740 addons.go:234] Setting addon volumesnapshots=true in "addons-970000"
	I0904 12:25:46.583217    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583119    1740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-970000"
	I0904 12:25:46.583446    1740 retry.go:31] will retry after 652.747696ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583122    1740 addons.go:69] Setting cloud-spanner=true in profile "addons-970000"
	I0904 12:25:46.583487    1740 retry.go:31] will retry after 1.133394846s: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583491    1740 addons.go:234] Setting addon cloud-spanner=true in "addons-970000"
	I0904 12:25:46.583503    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583515    1740 retry.go:31] will retry after 1.24344709s: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583539    1740 retry.go:31] will retry after 504.603169ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583124    1740 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-970000"
	I0904 12:25:46.583566    1740 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-970000"
	I0904 12:25:46.583574    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583126    1740 addons.go:69] Setting storage-provisioner=true in profile "addons-970000"
	I0904 12:25:46.583588    1740 addons.go:234] Setting addon storage-provisioner=true in "addons-970000"
	I0904 12:25:46.583597    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583128    1740 addons.go:69] Setting volcano=true in profile "addons-970000"
	I0904 12:25:46.583730    1740 addons.go:234] Setting addon volcano=true in "addons-970000"
	I0904 12:25:46.583739    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583130    1740 addons.go:69] Setting metrics-server=true in profile "addons-970000"
	I0904 12:25:46.583837    1740 addons.go:234] Setting addon metrics-server=true in "addons-970000"
	I0904 12:25:46.583844    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583903    1740 retry.go:31] will retry after 646.420514ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583125    1740 addons.go:69] Setting ingress-dns=true in profile "addons-970000"
	I0904 12:25:46.583919    1740 addons.go:234] Setting addon ingress-dns=true in "addons-970000"
	I0904 12:25:46.583931    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.583133    1740 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-970000"
	I0904 12:25:46.583947    1740 retry.go:31] will retry after 1.1477427s: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583966    1740 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-970000"
	I0904 12:25:46.584010    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:46.584037    1740 retry.go:31] will retry after 577.424739ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583968    1740 retry.go:31] will retry after 588.243531ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.583118    1740 addons.go:69] Setting gcp-auth=true in profile "addons-970000"
	I0904 12:25:46.584078    1740 mustload.go:65] Loading cluster: addons-970000
	I0904 12:25:46.583134    1740 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-970000"
	I0904 12:25:46.584117    1740 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-970000"
	I0904 12:25:46.584149    1740 config.go:182] Loaded profile config "addons-970000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:25:46.584153    1740 retry.go:31] will retry after 759.600794ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.584241    1740 retry.go:31] will retry after 1.436639621s: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.584275    1740 retry.go:31] will retry after 1.457284227s: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.584309    1740 retry.go:31] will retry after 675.505235ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.584489    1740 retry.go:31] will retry after 785.483409ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/monitor: connect: connection refused
	I0904 12:25:46.587420    1740 out.go:177] * Verifying Kubernetes components...
	I0904 12:25:46.595434    1740 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 12:25:46.599387    1740 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0904 12:25:46.599418    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 12:25:46.603435    1740 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 12:25:46.603449    1740 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 12:25:46.603459    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:46.606329    1740 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0904 12:25:46.606336    1740 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0904 12:25:46.606342    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:46.623774    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
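The sed pipeline above patches the CoreDNS Corefile rather than replacing it: it inserts a log directive ahead of errors and a hosts stanza ahead of the resolv.conf forwarder, so in-cluster lookups of host.minikube.internal resolve to the host-side address 192.168.105.1. After the replace, the affected part of the Corefile reads roughly:

	        log
	        errors
	        ...
	        hosts {
	           192.168.105.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
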
	I0904 12:25:46.701029    1740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 12:25:46.775869    1740 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 12:25:46.775884    1740 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 12:25:46.788334    1740 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 12:25:46.788346    1740 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 12:25:46.794422    1740 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 12:25:46.794434    1740 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 12:25:46.797069    1740 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0904 12:25:46.797076    1740 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0904 12:25:46.802309    1740 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 12:25:46.802316    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 12:25:46.804168    1740 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0904 12:25:46.804173    1740 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0904 12:25:46.810066    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 12:25:46.852374    1740 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0904 12:25:46.853698    1740 node_ready.go:35] waiting up to 6m0s for node "addons-970000" to be "Ready" ...
	I0904 12:25:46.859871    1740 node_ready.go:49] node "addons-970000" has status "Ready":"True"
	I0904 12:25:46.859889    1740 node_ready.go:38] duration metric: took 6.169416ms for node "addons-970000" to be "Ready" ...
	I0904 12:25:46.859893    1740 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 12:25:46.867091    1740 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-970000" in "kube-system" namespace to be "Ready" ...
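The readiness gates minikube applies here can be reproduced with plain kubectl; a rough equivalent of the node and etcd checks (the kubeconfig context name matches the profile):

	kubectl --context addons-970000 wait --for=condition=Ready node/addons-970000 --timeout=6m
	kubectl --context addons-970000 -n kube-system wait --for=condition=Ready pod \
	  -l component=etcd --timeout=6m
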
	I0904 12:25:46.867338    1740 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0904 12:25:46.867347    1740 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0904 12:25:46.891411    1740 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0904 12:25:46.891422    1740 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0904 12:25:46.909554    1740 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0904 12:25:46.909566    1740 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0904 12:25:46.916007    1740 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 12:25:46.916016    1740 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0904 12:25:46.922260    1740 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 12:25:46.922268    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0904 12:25:46.943639    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 12:25:47.093464    1740 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 12:25:47.097428    1740 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-970000 service yakd-dashboard -n yakd-dashboard
	
	I0904 12:25:47.101386    1740 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0904 12:25:47.111441    1740 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 12:25:47.115501    1740 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 12:25:47.115510    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 12:25:47.115521    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.167383    1740 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0904 12:25:47.171477    1740 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0904 12:25:47.174413    1740 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0904 12:25:47.177840    1740 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0904 12:25:47.177848    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0904 12:25:47.177858    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.184365    1740 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0904 12:25:47.190619    1740 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0904 12:25:47.190633    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 12:25:47.190645    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.205589    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 12:25:47.234393    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 12:25:47.238480    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 12:25:47.242386    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 12:25:47.246421    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 12:25:47.250276    1740 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0904 12:25:47.250276    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 12:25:47.258386    1740 out.go:177]   - Using image docker.io/registry:2.8.3
	I0904 12:25:47.261350    1740 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 12:25:47.261358    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 12:25:47.261369    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.262384    1740 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-970000"
	I0904 12:25:47.262401    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:47.265398    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 12:25:47.265687    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0904 12:25:47.269399    1740 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 12:25:47.277398    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 12:25:47.277405    1740 out.go:177]   - Using image docker.io/busybox:stable
	I0904 12:25:47.281456    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 12:25:47.281496    1740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 12:25:47.281504    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 12:25:47.281517    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.287365    1740 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 12:25:47.287380    1740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 12:25:47.287391    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.347378    1740 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0904 12:25:47.351427    1740 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 12:25:47.351438    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0904 12:25:47.351449    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.351816    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 12:25:47.355770    1740 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-970000" context rescaled to 1 replicas
	I0904 12:25:47.374392    1740 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0904 12:25:47.378450    1740 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 12:25:47.378461    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 12:25:47.378472    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.379357    1740 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 12:25:47.379364    1740 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 12:25:47.433290    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 12:25:47.438855    1740 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 12:25:47.438865    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 12:25:47.467876    1740 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 12:25:47.467888    1740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 12:25:47.537455    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 12:25:47.537455    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 12:25:47.577695    1740 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 12:25:47.577709    1740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 12:25:47.644266    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 12:25:47.722899    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 12:25:47.728864    1740 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 12:25:47.728880    1740 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 12:25:47.728896    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.736756    1740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 12:25:47.742952    1740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 12:25:47.742963    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 12:25:47.742975    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.759137    1740 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 12:25:47.759149    1740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 12:25:47.818707    1740 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 12:25:47.818719    1740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 12:25:47.829926    1740 addons.go:234] Setting addon default-storageclass=true in "addons-970000"
	I0904 12:25:47.829948    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:47.831283    1740 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 12:25:47.831300    1740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 12:25:47.831308    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:47.881592    1740 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 12:25:47.881607    1740 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 12:25:47.933032    1740 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 12:25:47.933044    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 12:25:47.958975    1740 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 12:25:47.958988    1740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 12:25:47.967480    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 12:25:48.026514    1740 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0904 12:25:48.030553    1740 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 12:25:48.030564    1740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 12:25:48.030574    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:48.030962    1740 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 12:25:48.030969    1740 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 12:25:48.039563    1740 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 12:25:48.039576    1740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 12:25:48.042220    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:48.069181    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 12:25:48.085350    1740 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 12:25:48.085363    1740 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 12:25:48.105762    1740 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 12:25:48.105772    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 12:25:48.131617    1740 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 12:25:48.131630    1740 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 12:25:48.176958    1740 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 12:25:48.176968    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 12:25:48.197083    1740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 12:25:48.197093    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 12:25:48.201401    1740 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 12:25:48.201408    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 12:25:48.273399    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 12:25:48.317759    1740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 12:25:48.317771    1740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 12:25:48.358671    1740 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 12:25:48.358686    1740 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 12:25:48.425439    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 12:25:48.495729    1740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 12:25:48.495743    1740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 12:25:48.560335    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 12:25:48.882111    1740 pod_ready.go:103] pod "etcd-addons-970000" in "kube-system" namespace has status "Ready":"False"
	I0904 12:25:50.909910    1740 pod_ready.go:103] pod "etcd-addons-970000" in "kube-system" namespace has status "Ready":"False"
	I0904 12:25:51.094501    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.88899325s)
	I0904 12:25:51.094523    1740 addons.go:475] Verifying addon ingress=true in "addons-970000"
	I0904 12:25:51.102292    1740 out.go:177] * Verifying ingress addon...
	I0904 12:25:51.110681    1740 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 12:25:51.113244    1740 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 12:25:51.281437    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.015836209s)
	I0904 12:25:51.281522    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.929793458s)
	I0904 12:25:51.281583    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.848379584s)
	I0904 12:25:51.281630    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.744238375s)
	I0904 12:25:51.281639    1740 addons.go:475] Verifying addon registry=true in "addons-970000"
	I0904 12:25:51.281651    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.74427475s)
	I0904 12:25:51.281762    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.314354416s)
	I0904 12:25:51.281779    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.212667459s)
	I0904 12:25:51.281753    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.637517s)
	I0904 12:25:51.281822    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.008482334s)
	W0904 12:25:51.281835    1740 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 12:25:51.281847    1740 retry.go:31] will retry after 336.458502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
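The failure above is a CRD establishment race: the same kubectl apply both creates the VolumeSnapshot CRDs and a VolumeSnapshotClass custom resource, and the API server has not yet registered the new kind when the custom resource is submitted, hence `no matches for kind "VolumeSnapshotClass"` and the hint "ensure CRDs are installed first". The remedy visible in the log is to back off (~336ms, per retry.go) and re-run the apply, the second time with `--force`. A minimal Go sketch of that retry shape; the attempt count, backoff schedule, and manifest list are illustrative, not minikube's actual values:

```go
// Sketch of the apply-with-retry pattern in the log above: the first
// apply registers the CRDs but fails on the custom resource because the
// new kind is not yet discoverable; a short backoff and re-apply
// succeeds once the CRDs are established.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(attempts int, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	backoff := 300 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		time.Sleep(backoff) // give the API server time to establish the CRDs
		backoff *= 2
	}
	return lastErr
}

func main() {
	if err := applyWithRetry(3,
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	); err != nil {
		fmt.Println(err)
	}
}
```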
	I0904 12:25:51.285609    1740 out.go:177] * Verifying registry addon...
	I0904 12:25:51.294242    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 12:25:51.306837    1740 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 12:25:51.306847    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0904 12:25:51.307019    1740 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
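The `default-storageclass` warning above is a standard optimistic-concurrency conflict: another writer updated the `local-path` StorageClass between minikube's read and its update, so the write carrying the stale resourceVersion was rejected. The usual client-go remedy is to re-read and re-apply the mutation on conflict; a hypothetical sketch of that shape (package and function names are mine, not minikube's callback):

```go
// Hypothetical sketch: retry the default-StorageClass annotation update
// on 409 Conflict using client-go's retry helper. Each attempt re-reads
// the object so the Update carries a fresh resourceVersion.
package storageutil

import (
	"context"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func MarkDefault(cs kubernetes.Interface, name string, isDefault bool) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = strconv.FormatBool(isDefault)
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err // a Conflict here triggers another Get+Update round
	})
}
```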
	I0904 12:25:51.618530    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 12:25:51.640773    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.215389417s)
	I0904 12:25:51.640793    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.080521292s)
	I0904 12:25:51.640801    1740 addons.go:475] Verifying addon metrics-server=true in "addons-970000"
	I0904 12:25:51.640794    1740 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-970000"
	I0904 12:25:51.644853    1740 out.go:177] * Verifying csi-hostpath-driver addon...
	I0904 12:25:51.655207    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 12:25:51.671343    1740 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 12:25:51.671354    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:51.798092    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:52.159770    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:52.302474    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:52.735577    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:52.815156    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:53.160948    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:53.299141    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:53.372809    1740 pod_ready.go:103] pod "etcd-addons-970000" in "kube-system" namespace has status "Ready":"False"
	I0904 12:25:53.712259    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:53.796640    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:54.033202    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.414710375s)
	I0904 12:25:54.157577    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:54.298089    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:54.716174    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:54.797902    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:55.159613    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:55.297053    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:55.717347    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:55.798008    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:55.871363    1740 pod_ready.go:93] pod "etcd-addons-970000" in "kube-system" namespace has status "Ready":"True"
	I0904 12:25:55.871371    1740 pod_ready.go:82] duration metric: took 9.004493208s for pod "etcd-addons-970000" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.871376    1740 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-970000" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.873188    1740 pod_ready.go:93] pod "kube-apiserver-addons-970000" in "kube-system" namespace has status "Ready":"True"
	I0904 12:25:55.873197    1740 pod_ready.go:82] duration metric: took 1.818ms for pod "kube-apiserver-addons-970000" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.873201    1740 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-970000" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.875539    1740 pod_ready.go:93] pod "kube-controller-manager-addons-970000" in "kube-system" namespace has status "Ready":"True"
	I0904 12:25:55.875545    1740 pod_ready.go:82] duration metric: took 2.340833ms for pod "kube-controller-manager-addons-970000" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.875549    1740 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fxmzp" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.877823    1740 pod_ready.go:93] pod "kube-proxy-fxmzp" in "kube-system" namespace has status "Ready":"True"
	I0904 12:25:55.877829    1740 pod_ready.go:82] duration metric: took 2.2765ms for pod "kube-proxy-fxmzp" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.877832    1740 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-970000" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.879864    1740 pod_ready.go:93] pod "kube-scheduler-addons-970000" in "kube-system" namespace has status "Ready":"True"
	I0904 12:25:55.879869    1740 pod_ready.go:82] duration metric: took 2.032833ms for pod "kube-scheduler-addons-970000" in "kube-system" namespace to be "Ready" ...
	I0904 12:25:55.879871    1740 pod_ready.go:39] duration metric: took 9.020199208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 12:25:55.879881    1740 api_server.go:52] waiting for apiserver process to appear ...
	I0904 12:25:55.879942    1740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 12:25:55.886906    1740 api_server.go:72] duration metric: took 9.304212292s to wait for apiserver process to appear ...
	I0904 12:25:55.886914    1740 api_server.go:88] waiting for apiserver healthz status ...
	I0904 12:25:55.886921    1740 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0904 12:25:55.889884    1740 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0904 12:25:55.890396    1740 api_server.go:141] control plane version: v1.31.0
	I0904 12:25:55.890403    1740 api_server.go:131] duration metric: took 3.486542ms to wait for apiserver health ...
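The healthz gate above is a plain HTTPS GET against the apiserver, treating a 200 response with body "ok" as healthy. A minimal standalone sketch of the same probe; the address comes from the log, and the TLS handling is a simplification (minikube trusts the cluster CA from its kubeconfig rather than skipping verification):

```go
// Minimal sketch of the apiserver healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skipping verification only because this is a throwaway probe
		// against a local test VM; a real client should trust the
		// cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.105.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}
```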
	I0904 12:25:55.890406    1740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 12:25:56.076054    1740 system_pods.go:59] 17 kube-system pods found
	I0904 12:25:56.076067    1740 system_pods.go:61] "coredns-6f6b679f8f-9f28x" [50c3900b-01ba-4e8a-adf8-a8433af58bdc] Running
	I0904 12:25:56.076071    1740 system_pods.go:61] "csi-hostpath-attacher-0" [4fbd4ace-a70d-4fc7-970a-6c16b083de57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 12:25:56.076075    1740 system_pods.go:61] "csi-hostpath-resizer-0" [d39d1f5a-c2c0-48c2-971d-71dc8531883a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 12:25:56.076081    1740 system_pods.go:61] "csi-hostpathplugin-bgfq2" [4f0503ad-770e-4299-ab1f-e18956492639] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 12:25:56.076085    1740 system_pods.go:61] "etcd-addons-970000" [61205449-44b1-4774-8144-8ebe9a5d1973] Running
	I0904 12:25:56.076087    1740 system_pods.go:61] "kube-apiserver-addons-970000" [82a2f723-2d1d-4afa-a519-7d3e0dcda9c2] Running
	I0904 12:25:56.076089    1740 system_pods.go:61] "kube-controller-manager-addons-970000" [9fbb4ced-189d-44fe-844c-d4f181c64d79] Running
	I0904 12:25:56.076092    1740 system_pods.go:61] "kube-ingress-dns-minikube" [1cb8440d-4a7b-433f-9a4d-9a58171d7ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 12:25:56.076095    1740 system_pods.go:61] "kube-proxy-fxmzp" [d1861af4-2292-49ce-a065-b4db97da1192] Running
	I0904 12:25:56.076096    1740 system_pods.go:61] "kube-scheduler-addons-970000" [77447c35-d662-434e-be3a-66c98ebbdd41] Running
	I0904 12:25:56.076099    1740 system_pods.go:61] "metrics-server-84c5f94fbc-b5mqw" [e8ef34f0-6061-446a-99f1-31ce7bcb791c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 12:25:56.076102    1740 system_pods.go:61] "nvidia-device-plugin-daemonset-4rl79" [e1603688-ffe4-4f9b-bdad-e827397c39d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 12:25:56.076105    1740 system_pods.go:61] "registry-6fb4cdfc84-cwbqg" [f7b9bfe5-0693-429f-b374-e1fdc2260b34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 12:25:56.076107    1740 system_pods.go:61] "registry-proxy-2qzcv" [0bbedfdb-5af5-493c-82d3-98bada5a51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 12:25:56.076110    1740 system_pods.go:61] "snapshot-controller-56fcc65765-m2gwf" [4983e436-349f-4583-8caa-d3fadfcfdf78] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 12:25:56.076113    1740 system_pods.go:61] "snapshot-controller-56fcc65765-pq2qv" [f9a56a36-7dc1-450b-8652-62f29a493288] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 12:25:56.076114    1740 system_pods.go:61] "storage-provisioner" [531b806b-770d-45d4-887a-a742bc870252] Running
	I0904 12:25:56.076118    1740 system_pods.go:74] duration metric: took 185.700625ms to wait for pod list to return data ...
	I0904 12:25:56.076122    1740 default_sa.go:34] waiting for default service account to be created ...
	I0904 12:25:56.157274    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:56.248186    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 12:25:56.248201    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:56.272293    1740 default_sa.go:45] found service account: "default"
	I0904 12:25:56.272302    1740 default_sa.go:55] duration metric: took 196.182167ms for default service account to be created ...
	I0904 12:25:56.272306    1740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 12:25:56.282785    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 12:25:56.289807    1740 addons.go:234] Setting addon gcp-auth=true in "addons-970000"
	I0904 12:25:56.289828    1740 host.go:66] Checking if "addons-970000" exists ...
	I0904 12:25:56.290548    1740 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 12:25:56.290557    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/addons-970000/id_rsa Username:docker}
	I0904 12:25:56.296069    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 12:25:56.321060    1740 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 12:25:56.325065    1740 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0904 12:25:56.329921    1740 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 12:25:56.329928    1740 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 12:25:56.336248    1740 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 12:25:56.336256    1740 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 12:25:56.343332    1740 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 12:25:56.343339    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 12:25:56.351668    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 12:25:56.475780    1740 system_pods.go:86] 17 kube-system pods found
	I0904 12:25:56.475792    1740 system_pods.go:89] "coredns-6f6b679f8f-9f28x" [50c3900b-01ba-4e8a-adf8-a8433af58bdc] Running
	I0904 12:25:56.475797    1740 system_pods.go:89] "csi-hostpath-attacher-0" [4fbd4ace-a70d-4fc7-970a-6c16b083de57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 12:25:56.475800    1740 system_pods.go:89] "csi-hostpath-resizer-0" [d39d1f5a-c2c0-48c2-971d-71dc8531883a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 12:25:56.475804    1740 system_pods.go:89] "csi-hostpathplugin-bgfq2" [4f0503ad-770e-4299-ab1f-e18956492639] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 12:25:56.475806    1740 system_pods.go:89] "etcd-addons-970000" [61205449-44b1-4774-8144-8ebe9a5d1973] Running
	I0904 12:25:56.475811    1740 system_pods.go:89] "kube-apiserver-addons-970000" [82a2f723-2d1d-4afa-a519-7d3e0dcda9c2] Running
	I0904 12:25:56.475813    1740 system_pods.go:89] "kube-controller-manager-addons-970000" [9fbb4ced-189d-44fe-844c-d4f181c64d79] Running
	I0904 12:25:56.475822    1740 system_pods.go:89] "kube-ingress-dns-minikube" [1cb8440d-4a7b-433f-9a4d-9a58171d7ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 12:25:56.475824    1740 system_pods.go:89] "kube-proxy-fxmzp" [d1861af4-2292-49ce-a065-b4db97da1192] Running
	I0904 12:25:56.475826    1740 system_pods.go:89] "kube-scheduler-addons-970000" [77447c35-d662-434e-be3a-66c98ebbdd41] Running
	I0904 12:25:56.475828    1740 system_pods.go:89] "metrics-server-84c5f94fbc-b5mqw" [e8ef34f0-6061-446a-99f1-31ce7bcb791c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 12:25:56.475832    1740 system_pods.go:89] "nvidia-device-plugin-daemonset-4rl79" [e1603688-ffe4-4f9b-bdad-e827397c39d5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 12:25:56.475835    1740 system_pods.go:89] "registry-6fb4cdfc84-cwbqg" [f7b9bfe5-0693-429f-b374-e1fdc2260b34] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 12:25:56.475838    1740 system_pods.go:89] "registry-proxy-2qzcv" [0bbedfdb-5af5-493c-82d3-98bada5a51cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 12:25:56.475841    1740 system_pods.go:89] "snapshot-controller-56fcc65765-m2gwf" [4983e436-349f-4583-8caa-d3fadfcfdf78] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 12:25:56.475844    1740 system_pods.go:89] "snapshot-controller-56fcc65765-pq2qv" [f9a56a36-7dc1-450b-8652-62f29a493288] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 12:25:56.475846    1740 system_pods.go:89] "storage-provisioner" [531b806b-770d-45d4-887a-a742bc870252] Running
	I0904 12:25:56.475849    1740 system_pods.go:126] duration metric: took 203.545625ms to wait for k8s-apps to be running ...
	I0904 12:25:56.475853    1740 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 12:25:56.475911    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 12:25:56.578773    1740 system_svc.go:56] duration metric: took 102.916333ms WaitForService to wait for kubelet
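The kubelet gate above relies on systemctl's exit-code contract: `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when the unit is active. A tiny local sketch of the same check (minikube runs it over SSH inside the VM, with sudo):

```go
// Sketch of the kubelet liveness check: the exit code alone answers it.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```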
	I0904 12:25:56.578789    1740 kubeadm.go:582] duration metric: took 9.996113167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 12:25:56.578801    1740 node_conditions.go:102] verifying NodePressure condition ...
	I0904 12:25:56.579787    1740 addons.go:475] Verifying addon gcp-auth=true in "addons-970000"
	I0904 12:25:56.586910    1740 out.go:177] * Verifying gcp-auth addon...
	I0904 12:25:56.593421    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 12:25:56.594694    1740 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 12:25:56.672112    1740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 12:25:56.672121    1740 node_conditions.go:123] node cpu capacity is 2
	I0904 12:25:56.672127    1740 node_conditions.go:105] duration metric: took 93.325334ms to run NodePressure ...
	I0904 12:25:56.672133    1740 start.go:241] waiting for startup goroutines ...
	I0904 12:25:56.696863    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 12:25:56.798101    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 47 kapi.go:96 poll lines elided: the csi-hostpath-driver and registry selectors were re-checked on an ~500ms cadence from 12:25:57 through 12:26:08, both still Pending ...]
	I0904 12:26:08.798629    1740 kapi.go:107] duration metric: took 17.504353166s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 12:26:09.159651    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... 75 kapi.go:96 poll lines elided: the csi-hostpath-driver selector was re-checked on an ~500ms cadence from 12:26:09 through 12:26:46, still Pending ...]
	I0904 12:26:47.170361    1740 kapi.go:107] duration metric: took 55.509727625s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
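The kapi.go:75/86/96/107 lines that dominate this run are one polling primitive: list pods by label selector, log the current state, sleep roughly 500ms, repeat, and emit a duration metric once everything is up. A rough client-go sketch of that loop, checking only the pod phase as a simplification; the names and the interval are inferred from the log, not taken from minikube's source:

```go
// Rough sketch of the kapi.go wait loop: poll a label selector until
// every matching pod is Running (the log waits for Ready; phase is a
// simplification here).
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func WaitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors / empty lists
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, as in the log above
				}
			}
			return true, nil
		})
}
```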
	I0904 12:27:13.118315    1740 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 12:27:13.118326    1740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 10 kapi.go:96 poll lines elided: the ingress-nginx selector was re-checked on an ~500ms cadence from 12:27:13 through 12:27:18, still Pending ...]
	I0904 12:27:18.603159    1740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 12:27:18.603169    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:27:18.625239    1740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 12:27:19.102134    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:27:19.119082    1740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 12:27:19.602445    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:27:19.619198    1740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 12:27:20.107436    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... ~135 further near-identical kapi.go:96 polling lines (every ~0.5s, 12:27:20 to 12:27:54) elided; both pods remained Pending ...]
	I0904 12:27:54.127865    1740 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 12:27:54.127877    1740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 12:27:54.601660    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:27:54.618905    1740 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 12:27:54.618913    1740 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... ~120 further near-identical polling lines (12:27:55 to 12:28:24) elided; both selectors still reported Pending ...]
	I0904 12:28:24.618273    1740 kapi.go:107] duration metric: took 2m33.503846208s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 12:28:25.101096    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:28:25.604606    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:28:26.100994    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:28:26.601048    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:28:27.100232    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 12:28:27.601239    1740 kapi.go:107] duration metric: took 2m31.003998625s to wait for kubernetes.io/minikube-addons=gcp-auth ...
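
The two kapi.go waits above polled on a ~0.5s cadence for roughly two and a half minutes each before the ingress-nginx and gcp-auth pods left Pending. As a rough sketch of this pattern (not minikube's actual kapi.go implementation; the kubeconfig wiring, namespace, and 6-minute timeout below are assumptions for illustration), a label-selector wait can be written with client-go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls every 500ms (the cadence visible in the log above)
    // until every pod matching the label selector reports phase Running.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // treat API hiccups as transient and keep polling
                }
                if len(pods.Items) == 0 {
                    return false, nil // nothing matching the selector has been scheduled yet
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    func main() {
        // Illustrative wiring: load ~/.kube/config and wait on the same selector
        // the log above was polling.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForPodsRunning(context.Background(), cs, "ingress-nginx",
            "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
            panic(err)
        }
    }

Pending here covers pods whose containers have not been created yet, which is why the loop keeps polling against a timeout rather than failing fast.
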
	I0904 12:28:27.606257    1740 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-970000 cluster.
	I0904 12:28:27.611246    1740 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 12:28:27.616575    1740 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
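
Continuing from the previous sketch's imports and clientset, a minimal way to opt a single pod out of the credential mount is to set that label at creation time (the pod name, image, and the label value "true" are assumptions; the message above only specifies the gcp-auth-skip-secret key):

    // The gcp-auth webhook leaves pods carrying the gcp-auth-skip-secret label alone.
    skip := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "no-gcp-creds", // hypothetical pod name
            Labels: map[string]string{"gcp-auth-skip-secret": "true"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "app",
                Image: "busybox", // hypothetical image
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.Background(), skip, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
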
	I0904 12:28:27.621256    1740 out.go:177] * Enabled addons: yakd, inspektor-gadget, volcano, cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0904 12:28:27.625328    1740 addons.go:510] duration metric: took 2m41.038833916s for enable addons: enabled=[yakd inspektor-gadget volcano cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin storage-provisioner-rancher metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0904 12:28:27.625345    1740 start.go:246] waiting for cluster config update ...
	I0904 12:28:27.625356    1740 start.go:255] writing updated cluster config ...
	I0904 12:28:27.625803    1740 ssh_runner.go:195] Run: rm -f paused
	I0904 12:28:27.781192    1740 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0904 12:28:27.785302    1740 out.go:201] 
	W0904 12:28:27.789228    1740 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0904 12:28:27.793236    1740 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0904 12:28:27.798252    1740 out.go:177] * Done! kubectl is now configured to use "addons-970000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 04 19:38:10 addons-970000 dockerd[1286]: time="2024-09-04T19:38:10.456062929Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:38:14 addons-970000 dockerd[1279]: time="2024-09-04T19:38:14.519235107Z" level=info msg="ignoring event" container=5eaf51d7cc055754e4a85e4dbaa7f676d00792d0cdba32f01eecade30c130ec6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:38:14 addons-970000 dockerd[1286]: time="2024-09-04T19:38:14.519624096Z" level=info msg="shim disconnected" id=5eaf51d7cc055754e4a85e4dbaa7f676d00792d0cdba32f01eecade30c130ec6 namespace=moby
	Sep 04 19:38:14 addons-970000 dockerd[1286]: time="2024-09-04T19:38:14.519746468Z" level=warning msg="cleaning up after shim disconnected" id=5eaf51d7cc055754e4a85e4dbaa7f676d00792d0cdba32f01eecade30c130ec6 namespace=moby
	Sep 04 19:38:14 addons-970000 dockerd[1286]: time="2024-09-04T19:38:14.519751092Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1279]: time="2024-09-04T19:38:17.802860898Z" level=info msg="ignoring event" container=4a5817d8588aed66111a612b421f22bf922173dc5c2b5de6c15a714db38e22de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.803126974Z" level=info msg="shim disconnected" id=4a5817d8588aed66111a612b421f22bf922173dc5c2b5de6c15a714db38e22de namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.803160598Z" level=warning msg="cleaning up after shim disconnected" id=4a5817d8588aed66111a612b421f22bf922173dc5c2b5de6c15a714db38e22de namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.803164848Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1279]: time="2024-09-04T19:38:17.941388063Z" level=info msg="ignoring event" container=395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.941574392Z" level=info msg="shim disconnected" id=395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44 namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.941607224Z" level=warning msg="cleaning up after shim disconnected" id=395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44 namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.941611974Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.969989572Z" level=info msg="shim disconnected" id=46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840 namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.970043112Z" level=warning msg="cleaning up after shim disconnected" id=46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840 namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1286]: time="2024-09-04T19:38:17.970053945Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:38:17 addons-970000 dockerd[1279]: time="2024-09-04T19:38:17.971765690Z" level=info msg="ignoring event" container=46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:38:18 addons-970000 dockerd[1279]: time="2024-09-04T19:38:18.014780122Z" level=info msg="ignoring event" container=4e4ae6c015a4d1c2b484fff5a0c7d7004961707d647a183565a21a9817b9d37f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:38:18 addons-970000 dockerd[1286]: time="2024-09-04T19:38:18.015293565Z" level=info msg="shim disconnected" id=4e4ae6c015a4d1c2b484fff5a0c7d7004961707d647a183565a21a9817b9d37f namespace=moby
	Sep 04 19:38:18 addons-970000 dockerd[1286]: time="2024-09-04T19:38:18.015325522Z" level=warning msg="cleaning up after shim disconnected" id=4e4ae6c015a4d1c2b484fff5a0c7d7004961707d647a183565a21a9817b9d37f namespace=moby
	Sep 04 19:38:18 addons-970000 dockerd[1286]: time="2024-09-04T19:38:18.015343355Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:38:18 addons-970000 dockerd[1286]: time="2024-09-04T19:38:18.055602890Z" level=info msg="shim disconnected" id=4b6fe9ce41eb96cbbc747c500a970c0b33ee72b3cc73f126fa5a64953efa6e33 namespace=moby
	Sep 04 19:38:18 addons-970000 dockerd[1286]: time="2024-09-04T19:38:18.055636889Z" level=warning msg="cleaning up after shim disconnected" id=4b6fe9ce41eb96cbbc747c500a970c0b33ee72b3cc73f126fa5a64953efa6e33 namespace=moby
	Sep 04 19:38:18 addons-970000 dockerd[1286]: time="2024-09-04T19:38:18.055641472Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:38:18 addons-970000 dockerd[1279]: time="2024-09-04T19:38:18.055795092Z" level=info msg="ignoring event" container=4b6fe9ce41eb96cbbc747c500a970c0b33ee72b3cc73f126fa5a64953efa6e33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	34ac11fa0e25e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   068ee55f3f018       gcp-auth-89d5ffd79-hx7s7
	07524b1629acf       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             9 minutes ago       Running             controller                 0                   99ed196c88bb4       ingress-nginx-controller-bc57996ff-w7xdq
	2550f8447886d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              patch                      0                   de663a6315131       ingress-nginx-admission-patch-bv58f
	0f7d823fbbecd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                     0                   f1c8fa41fbec1       ingress-nginx-admission-create-m2tqw
	974b235d3113e       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner     0                   6e7ca48c19fec       local-path-provisioner-86d989889c-vt2vp
	f5f00871017b3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   cfb44ef8f433a       kube-ingress-dns-minikube
	d4e15243e6fa6       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   3d352921669e6       nvidia-device-plugin-daemonset-4rl79
	46dcdf90bb9cb       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   4b6fe9ce41eb9       registry-proxy-2qzcv
	395ef8a0af1a2       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                             12 minutes ago      Exited              registry                   0                   4e4ae6c015a4d       registry-6fb4cdfc84-cwbqg
	b4a6b33e44cf6       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   d42c5b329ac53       cloud-spanner-emulator-769b77f747-n5nxb
	3ffbb3a49c768       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   8754a0f7d0491       yakd-dashboard-67d98fc6b-tghdh
	13130b9acdd91       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   38c58bfff05a3       storage-provisioner
	c1ea33efb2245       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                    0                   54eaaf125c952       coredns-6f6b679f8f-9f28x
	3c14e64de28eb       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                 0                   d8908019aa0a3       kube-proxy-fxmzp
	9e049193af794       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   5e6317a320108       etcd-addons-970000
	15a480f5f3e1b       fbbbd428abb4d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   9b6e64445f6fa       kube-scheduler-addons-970000
	b17584f23a445       fcb0683e6bdbd                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   e9e554b71153e       kube-controller-manager-addons-970000
	908daf31e6f99       cd0f0ae0ec9e0                                                                                                                12 minutes ago      Running             kube-apiserver             0                   a8a56d8d76a88       kube-apiserver-addons-970000
	
	
	==> controller_ingress [07524b1629ac] <==
	W0904 19:28:24.125285       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0904 19:28:24.125416       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0904 19:28:24.128449       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/arm64"
	I0904 19:28:24.156587       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0904 19:28:24.163865       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0904 19:28:24.168543       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0904 19:28:24.173185       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ca7504f7-ee67-46a6-ac72-57814021f4e9", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0904 19:28:24.174431       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"95b525ab-d43f-401e-955b-39d050c22b1b", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0904 19:28:24.174442       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"8c1adfb2-d655-4d9e-8a34-f442cea64469", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0904 19:28:25.370497       7 nginx.go:317] "Starting NGINX process"
	I0904 19:28:25.370534       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0904 19:28:25.370678       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0904 19:28:25.371102       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0904 19:28:25.378563       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0904 19:28:25.378715       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-w7xdq"
	I0904 19:28:25.382539       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-w7xdq" node="addons-970000"
	I0904 19:28:25.387897       7 controller.go:213] "Backend successfully reloaded"
	I0904 19:28:25.389846       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0904 19:28:25.390095       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-w7xdq", UID:"ffbe9c17-dcb7-48d6-9c72-235b21df32d5", APIVersion:"v1", ResourceVersion:"1254", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [c1ea33efb224] <==
	[INFO] 127.0.0.1:38061 - 62447 "HINFO IN 4482410825358278108.3751266110202374879. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009653875s
	[INFO] 10.244.0.6:35735 - 32076 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150417s
	[INFO] 10.244.0.6:35735 - 35649 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000193167s
	[INFO] 10.244.0.6:40125 - 29770 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051041s
	[INFO] 10.244.0.6:40125 - 13428 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028833s
	[INFO] 10.244.0.6:59601 - 23598 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049542s
	[INFO] 10.244.0.6:59601 - 9512 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126708s
	[INFO] 10.244.0.6:56005 - 233 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00003975s
	[INFO] 10.244.0.6:56005 - 2534 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039542s
	[INFO] 10.244.0.6:43853 - 26661 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043084s
	[INFO] 10.244.0.6:43853 - 8231 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000027667s
	[INFO] 10.244.0.6:38371 - 62822 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000015208s
	[INFO] 10.244.0.6:38371 - 50017 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000021083s
	[INFO] 10.244.0.6:37350 - 24825 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000020083s
	[INFO] 10.244.0.6:37350 - 21498 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000023667s
	[INFO] 10.244.0.6:37996 - 7972 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000016292s
	[INFO] 10.244.0.6:37996 - 45610 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000018292s
	[INFO] 10.244.0.24:39603 - 27853 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.005627844s
	[INFO] 10.244.0.24:44379 - 64034 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.005780927s
	[INFO] 10.244.0.24:40230 - 46069 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000762996s
	[INFO] 10.244.0.24:35533 - 25063 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000319081s
	[INFO] 10.244.0.24:43057 - 31951 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000054083s
	[INFO] 10.244.0.24:59854 - 37197 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00004975s
	[INFO] 10.244.0.24:38377 - 7021 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.002594736s
	[INFO] 10.244.0.24:34623 - 36287 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002634569s
	
	
	==> describe nodes <==
	Name:               addons-970000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-970000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=addons-970000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T12_25_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-970000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 19:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-970000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 19:38:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 19:34:20 +0000   Wed, 04 Sep 2024 19:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 19:34:20 +0000   Wed, 04 Sep 2024 19:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 19:34:20 +0000   Wed, 04 Sep 2024 19:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 19:34:20 +0000   Wed, 04 Sep 2024 19:25:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-970000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 d220942cb38140de900e72c7ecb1a229
	  System UUID:                d220942cb38140de900e72c7ecb1a229
	  Boot ID:                    3e655381-2c10-4087-8c0d-f62db1e5610d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-n5nxb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gcp-auth                    gcp-auth-89d5ffd79-hx7s7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-w7xdq    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-9f28x                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-970000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-970000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-970000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-fxmzp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-970000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-4rl79        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-vt2vp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-tghdh              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-970000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-970000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-970000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-970000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-970000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-970000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-970000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-970000 event: Registered Node addons-970000 in Controller
	
	
	==> dmesg <==
	[  +6.768242] kauditd_printk_skb: 70 callbacks suppressed
	[Sep 4 19:26] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.827677] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.143015] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.791032] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.306558] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.742105] kauditd_printk_skb: 26 callbacks suppressed
	[Sep 4 19:27] kauditd_printk_skb: 21 callbacks suppressed
	[ +31.006260] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.512267] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 4 19:28] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.027923] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.508484] kauditd_printk_skb: 18 callbacks suppressed
	[ +19.329884] kauditd_printk_skb: 7 callbacks suppressed
	[Sep 4 19:29] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.984815] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 4 19:32] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 4 19:37] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.028082] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.887935] kauditd_printk_skb: 7 callbacks suppressed
	[ +16.389626] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.786775] kauditd_printk_skb: 7 callbacks suppressed
	[Sep 4 19:38] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.468668] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.255638] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [9e049193af79] <==
	{"level":"info","ts":"2024-09-04T19:25:50.740683Z","caller":"traceutil/trace.go:171","msg":"trace[1565618977] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:1; response_revision:656; }","duration":"124.356902ms","start":"2024-09-04T19:25:50.616324Z","end":"2024-09-04T19:25:50.740681Z","steps":["trace[1565618977] 'agreement among raft nodes before linearized reading'  (duration: 124.334194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:50.740710Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.755484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T19:25:50.740719Z","caller":"traceutil/trace.go:171","msg":"trace[1845333811] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:656; }","duration":"135.765109ms","start":"2024-09-04T19:25:50.604953Z","end":"2024-09-04T19:25:50.740718Z","steps":["trace[1845333811] 'agreement among raft nodes before linearized reading'  (duration: 135.751734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:50.740759Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.471917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.2\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-09-04T19:25:50.740768Z","caller":"traceutil/trace.go:171","msg":"trace[1557842050] range","detail":"{range_begin:/registry/masterleases/192.168.105.2; range_end:; response_count:1; response_revision:656; }","duration":"142.482167ms","start":"2024-09-04T19:25:50.598285Z","end":"2024-09-04T19:25:50.740767Z","steps":["trace[1557842050] 'agreement among raft nodes before linearized reading'  (duration: 142.458292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:50.740807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.426965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/metrics-server-84c5f94fbc\" ","response":"range_response_count:1 size:3282"}
	{"level":"info","ts":"2024-09-04T19:25:50.740814Z","caller":"traceutil/trace.go:171","msg":"trace[820662108] range","detail":"{range_begin:/registry/replicasets/kube-system/metrics-server-84c5f94fbc; range_end:; response_count:1; response_revision:656; }","duration":"162.433757ms","start":"2024-09-04T19:25:50.578379Z","end":"2024-09-04T19:25:50.740812Z","steps":["trace[820662108] 'agreement among raft nodes before linearized reading'  (duration: 162.412715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:50.740854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.542173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5308"}
	{"level":"info","ts":"2024-09-04T19:25:50.740863Z","caller":"traceutil/trace.go:171","msg":"trace[1025559165] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:656; }","duration":"162.551632ms","start":"2024-09-04T19:25:50.578310Z","end":"2024-09-04T19:25:50.740861Z","steps":["trace[1025559165] 'agreement among raft nodes before linearized reading'  (duration: 162.529216ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:50.740898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.56789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\" ","response":"range_response_count:1 size:35490"}
	{"level":"info","ts":"2024-09-04T19:25:50.740908Z","caller":"traceutil/trace.go:171","msg":"trace[1009303716] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io; range_end:; response_count:1; response_revision:656; }","duration":"171.57789ms","start":"2024-09-04T19:25:50.569328Z","end":"2024-09-04T19:25:50.740906Z","steps":["trace[1009303716] 'agreement among raft nodes before linearized reading'  (duration: 171.554431ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:50.740944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.778847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-970000\" ","response":"range_response_count:1 size:4476"}
	{"level":"info","ts":"2024-09-04T19:25:50.740953Z","caller":"traceutil/trace.go:171","msg":"trace[1907618] range","detail":"{range_begin:/registry/minions/addons-970000; range_end:; response_count:1; response_revision:656; }","duration":"171.788305ms","start":"2024-09-04T19:25:50.569163Z","end":"2024-09-04T19:25:50.740952Z","steps":["trace[1907618] 'agreement among raft nodes before linearized reading'  (duration: 171.767597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:58.503854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.829924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-04T19:25:58.503887Z","caller":"traceutil/trace.go:171","msg":"trace[1732751054] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:968; }","duration":"147.901299ms","start":"2024-09-04T19:25:58.355978Z","end":"2024-09-04T19:25:58.503879Z","steps":["trace[1732751054] 'range keys from in-memory index tree'  (duration: 147.711798ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:25:58.503861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.231026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T19:25:58.503949Z","caller":"traceutil/trace.go:171","msg":"trace[1616514125] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:968; }","duration":"132.320651ms","start":"2024-09-04T19:25:58.371623Z","end":"2024-09-04T19:25:58.503944Z","steps":["trace[1616514125] 'range keys from in-memory index tree'  (duration: 132.213609ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T19:26:07.206593Z","caller":"traceutil/trace.go:171","msg":"trace[616574155] transaction","detail":"{read_only:false; response_revision:1007; number_of_response:1; }","duration":"296.877676ms","start":"2024-09-04T19:26:06.909705Z","end":"2024-09-04T19:26:07.206582Z","steps":["trace[616574155] 'process raft request'  (duration: 296.816301ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T19:26:26.929705Z","caller":"traceutil/trace.go:171","msg":"trace[88517706] transaction","detail":"{read_only:false; response_revision:1081; number_of_response:1; }","duration":"177.751136ms","start":"2024-09-04T19:26:26.751947Z","end":"2024-09-04T19:26:26.929699Z","steps":["trace[88517706] 'process raft request'  (duration: 176.319262ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T19:26:26.929705Z","caller":"traceutil/trace.go:171","msg":"trace[1815474364] linearizableReadLoop","detail":"{readStateIndex:1100; appliedIndex:1099; }","duration":"158.989365ms","start":"2024-09-04T19:26:26.770704Z","end":"2024-09-04T19:26:26.929693Z","steps":["trace[1815474364] 'read index received'  (duration: 157.550075ms)","trace[1815474364] 'applied index is now lower than readState.Index'  (duration: 1.438915ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T19:26:26.929867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.154782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2024-09-04T19:26:26.929898Z","caller":"traceutil/trace.go:171","msg":"trace[1503295461] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1081; }","duration":"159.197323ms","start":"2024-09-04T19:26:26.770696Z","end":"2024-09-04T19:26:26.929893Z","steps":["trace[1503295461] 'agreement among raft nodes before linearized reading'  (duration: 159.011948ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T19:35:38.809073Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1877}
	{"level":"info","ts":"2024-09-04T19:35:38.904692Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1877,"took":"92.785724ms","hash":480879567,"current-db-size-bytes":8876032,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4853760,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-04T19:35:38.904722Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":480879567,"revision":1877,"compact-revision":-1}
	
	
	==> gcp-auth [34ac11fa0e25] <==
	2024/09/04 19:28:27 GCP Auth Webhook started!
	2024/09/04 19:28:43 Ready to marshal response ...
	2024/09/04 19:28:43 Ready to write response ...
	2024/09/04 19:28:43 Ready to marshal response ...
	2024/09/04 19:28:43 Ready to write response ...
	2024/09/04 19:29:06 Ready to marshal response ...
	2024/09/04 19:29:06 Ready to write response ...
	2024/09/04 19:29:06 Ready to marshal response ...
	2024/09/04 19:29:06 Ready to write response ...
	2024/09/04 19:29:06 Ready to marshal response ...
	2024/09/04 19:29:06 Ready to write response ...
	2024/09/04 19:37:17 Ready to marshal response ...
	2024/09/04 19:37:17 Ready to write response ...
	2024/09/04 19:37:24 Ready to marshal response ...
	2024/09/04 19:37:24 Ready to write response ...
	2024/09/04 19:37:48 Ready to marshal response ...
	2024/09/04 19:37:48 Ready to write response ...
	
	
	==> kernel <==
	 19:38:18 up 12 min,  0 users,  load average: 0.59, 0.64, 0.46
	Linux addons-970000 5.10.207 #1 SMP PREEMPT Tue Sep 3 18:23:52 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [908daf31e6f9] <==
	I0904 19:28:56.908439       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0904 19:28:57.020396       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0904 19:28:57.565211       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0904 19:28:57.887206       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0904 19:28:57.887206       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0904 19:28:57.904474       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0904 19:28:57.982473       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0904 19:28:58.022243       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0904 19:28:58.047850       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0904 19:37:31.797451       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0904 19:38:03.702461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:38:03.702491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:38:03.707969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:38:03.707983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:38:03.743880       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:38:03.743991       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:38:03.751220       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:38:03.751248       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:38:03.770638       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:38:03.770659       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 19:38:04.751596       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 19:38:04.771060       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0904 19:38:04.837214       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0904 19:38:14.491431       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0904 19:38:15.501445       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [b17584f23a44] <==
	W0904 19:38:08.986411       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:08.986502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 19:38:09.236558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="1.416µs"
	W0904 19:38:12.400035       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:12.400140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:38:15.045212       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:15.045336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:38:15.123306       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:15.123399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:38:15.276166       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:15.276210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:38:15.340129       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:15.340173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0904 19:38:15.502219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:38:16.470367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:16.470484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 19:38:16.577948       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0904 19:38:16.578020       1 shared_informer.go:320] Caches are synced for resource quota
	I0904 19:38:16.943036       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0904 19:38:16.943137       1 shared_informer.go:320] Caches are synced for garbage collector
	I0904 19:38:17.901812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="1.541µs"
	W0904 19:38:18.021763       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:18.021785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:38:18.321386       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:38:18.321410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [3c14e64de28e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0904 19:25:47.011310       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0904 19:25:47.019970       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0904 19:25:47.020006       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 19:25:47.062286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0904 19:25:47.062306       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 19:25:47.062328       1 server_linux.go:169] "Using iptables Proxier"
	I0904 19:25:47.063094       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 19:25:47.063212       1 server.go:483] "Version info" version="v1.31.0"
	I0904 19:25:47.063218       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 19:25:47.064196       1 config.go:197] "Starting service config controller"
	I0904 19:25:47.064204       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 19:25:47.064219       1 config.go:104] "Starting endpoint slice config controller"
	I0904 19:25:47.064221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 19:25:47.064418       1 config.go:326] "Starting node config controller"
	I0904 19:25:47.064421       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 19:25:47.165366       1 shared_informer.go:320] Caches are synced for node config
	I0904 19:25:47.165391       1 shared_informer.go:320] Caches are synced for service config
	I0904 19:25:47.165503       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15a480f5f3e1] <==
	W0904 19:25:38.833394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0904 19:25:38.833430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:38.833420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0904 19:25:38.833458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:38.833508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0904 19:25:38.833517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:38.833536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0904 19:25:38.833545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:38.833566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0904 19:25:38.833593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:38.833634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 19:25:38.833661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:38.834412       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 19:25:38.834423       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0904 19:25:39.655398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0904 19:25:39.655510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:39.786365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0904 19:25:39.786825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:39.812926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0904 19:25:39.812980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:39.878752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0904 19:25:39.878850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:25:39.883645       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 19:25:39.883703       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0904 19:25:42.730804       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 19:38:14 addons-970000 kubelet[2051]: I0904 19:38:14.764235    2051 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1f1f896-446b-4ecb-b3be-9fce7b2c33e1-host\") on node \"addons-970000\" DevicePath \"\""
	Sep 04 19:38:14 addons-970000 kubelet[2051]: I0904 19:38:14.764241    2051 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/d1f1f896-446b-4ecb-b3be-9fce7b2c33e1-cgroup\") on node \"addons-970000\" DevicePath \"\""
	Sep 04 19:38:14 addons-970000 kubelet[2051]: I0904 19:38:14.764248    2051 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/d1f1f896-446b-4ecb-b3be-9fce7b2c33e1-modules\") on node \"addons-970000\" DevicePath \"\""
	Sep 04 19:38:15 addons-970000 kubelet[2051]: I0904 19:38:15.481686    2051 scope.go:117] "RemoveContainer" containerID="763d5083d07c78ae6e6b346b093ee799e767d984f8b22f37db01d876a124f297"
	Sep 04 19:38:17 addons-970000 kubelet[2051]: I0904 19:38:17.267528    2051 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f1f896-446b-4ecb-b3be-9fce7b2c33e1" path="/var/lib/kubelet/pods/d1f1f896-446b-4ecb-b3be-9fce7b2c33e1/volumes"
	Sep 04 19:38:17 addons-970000 kubelet[2051]: I0904 19:38:17.903115    2051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wn2r\" (UniqueName: \"kubernetes.io/projected/a04b9460-f2bf-4d98-b2c1-681facd48bc2-kube-api-access-8wn2r\") pod \"a04b9460-f2bf-4d98-b2c1-681facd48bc2\" (UID: \"a04b9460-f2bf-4d98-b2c1-681facd48bc2\") "
	Sep 04 19:38:17 addons-970000 kubelet[2051]: I0904 19:38:17.903141    2051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a04b9460-f2bf-4d98-b2c1-681facd48bc2-gcp-creds\") pod \"a04b9460-f2bf-4d98-b2c1-681facd48bc2\" (UID: \"a04b9460-f2bf-4d98-b2c1-681facd48bc2\") "
	Sep 04 19:38:17 addons-970000 kubelet[2051]: I0904 19:38:17.903187    2051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a04b9460-f2bf-4d98-b2c1-681facd48bc2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a04b9460-f2bf-4d98-b2c1-681facd48bc2" (UID: "a04b9460-f2bf-4d98-b2c1-681facd48bc2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 04 19:38:17 addons-970000 kubelet[2051]: I0904 19:38:17.904835    2051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04b9460-f2bf-4d98-b2c1-681facd48bc2-kube-api-access-8wn2r" (OuterVolumeSpecName: "kube-api-access-8wn2r") pod "a04b9460-f2bf-4d98-b2c1-681facd48bc2" (UID: "a04b9460-f2bf-4d98-b2c1-681facd48bc2"). InnerVolumeSpecName "kube-api-access-8wn2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.003316    2051 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a04b9460-f2bf-4d98-b2c1-681facd48bc2-gcp-creds\") on node \"addons-970000\" DevicePath \"\""
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.003333    2051 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8wn2r\" (UniqueName: \"kubernetes.io/projected/a04b9460-f2bf-4d98-b2c1-681facd48bc2-kube-api-access-8wn2r\") on node \"addons-970000\" DevicePath \"\""
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.103815    2051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55kdj\" (UniqueName: \"kubernetes.io/projected/f7b9bfe5-0693-429f-b374-e1fdc2260b34-kube-api-access-55kdj\") pod \"f7b9bfe5-0693-429f-b374-e1fdc2260b34\" (UID: \"f7b9bfe5-0693-429f-b374-e1fdc2260b34\") "
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.104454    2051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b9bfe5-0693-429f-b374-e1fdc2260b34-kube-api-access-55kdj" (OuterVolumeSpecName: "kube-api-access-55kdj") pod "f7b9bfe5-0693-429f-b374-e1fdc2260b34" (UID: "f7b9bfe5-0693-429f-b374-e1fdc2260b34"). InnerVolumeSpecName "kube-api-access-55kdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.204206    2051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6lsr\" (UniqueName: \"kubernetes.io/projected/0bbedfdb-5af5-493c-82d3-98bada5a51cc-kube-api-access-v6lsr\") pod \"0bbedfdb-5af5-493c-82d3-98bada5a51cc\" (UID: \"0bbedfdb-5af5-493c-82d3-98bada5a51cc\") "
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.204249    2051 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-55kdj\" (UniqueName: \"kubernetes.io/projected/f7b9bfe5-0693-429f-b374-e1fdc2260b34-kube-api-access-55kdj\") on node \"addons-970000\" DevicePath \"\""
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.204888    2051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbedfdb-5af5-493c-82d3-98bada5a51cc-kube-api-access-v6lsr" (OuterVolumeSpecName: "kube-api-access-v6lsr") pod "0bbedfdb-5af5-493c-82d3-98bada5a51cc" (UID: "0bbedfdb-5af5-493c-82d3-98bada5a51cc"). InnerVolumeSpecName "kube-api-access-v6lsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.305086    2051 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v6lsr\" (UniqueName: \"kubernetes.io/projected/0bbedfdb-5af5-493c-82d3-98bada5a51cc-kube-api-access-v6lsr\") on node \"addons-970000\" DevicePath \"\""
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.538073    2051 scope.go:117] "RemoveContainer" containerID="46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840"
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.566916    2051 scope.go:117] "RemoveContainer" containerID="46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840"
	Sep 04 19:38:18 addons-970000 kubelet[2051]: E0904 19:38:18.567921    2051 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840" containerID="46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840"
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.567940    2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840"} err="failed to get container status \"46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840\": rpc error: code = Unknown desc = Error response from daemon: No such container: 46dcdf90bb9cb08fa5466ca78613c652e76e8117bac870e2d88e977ab4595840"
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.567953    2051 scope.go:117] "RemoveContainer" containerID="395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44"
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.576779    2051 scope.go:117] "RemoveContainer" containerID="395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44"
	Sep 04 19:38:18 addons-970000 kubelet[2051]: E0904 19:38:18.577635    2051 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44" containerID="395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44"
	Sep 04 19:38:18 addons-970000 kubelet[2051]: I0904 19:38:18.577719    2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44"} err="failed to get container status \"395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44\": rpc error: code = Unknown desc = Error response from daemon: No such container: 395ef8a0af1a21c06c5393eb22be47cbed0aa155bf4e04f3adfc104f85c39d44"
	
	
	==> storage-provisioner [13130b9acdd9] <==
	I0904 19:25:50.056416       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 19:25:50.094423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 19:25:50.094448       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 19:25:50.198823       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 19:25:50.198950       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-970000_3bac47ae-d5cd-47ed-a7cf-4c8caeedda86!
	I0904 19:25:50.199022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"121d56bd-8e39-458a-8838-350c1228431b", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-970000_3bac47ae-d5cd-47ed-a7cf-4c8caeedda86 became leader
	I0904 19:25:50.343081       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-970000_3bac47ae-d5cd-47ed-a7cf-4c8caeedda86!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-970000 -n addons-970000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-970000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-m2tqw ingress-nginx-admission-patch-bv58f
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-970000 describe pod busybox ingress-nginx-admission-create-m2tqw ingress-nginx-admission-patch-bv58f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-970000 describe pod busybox ingress-nginx-admission-create-m2tqw ingress-nginx-admission-patch-bv58f: exit status 1 (42.445292ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-970000/192.168.105.2
	Start Time:       Wed, 04 Sep 2024 12:29:06 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4kxn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4kxn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-970000
	  Normal   Pulling    7m42s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-m2tqw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bv58f" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-970000 describe pod busybox ingress-nginx-admission-create-m2tqw ingress-nginx-admission-patch-bv58f: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.29s)
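The Registry failure above reduces to the busybox helper pod never leaving ImagePullBackOff: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected with "unauthorized: authentication failed". A minimal host-side spot check, assuming docker and kubectl are available on the runner (the addons-970000 context name is taken from the log above):

	# Try the same pull outside the cluster, to separate registry-auth problems
	# from node-side networking.
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

	# Read the pod's waiting reason straight from the test's kube context.
	kubectl --context addons-970000 get pod busybox \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'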

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-659000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-659000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.970185375s)

-- stdout --
	* [cert-options-659000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-659000" primary control-plane node in "cert-options-659000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-659000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-659000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-659000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-659000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-659000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.023833ms)

-- stdout --
	* The control-plane node cert-options-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-659000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-659000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-659000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' =
-- stdout --
	apiVersion: v1
	clusters: null
	contexts: null
	current-context: ""
	kind: Config
	preferences: {}
	users: null

-- /stdout --
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-659000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-659000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.283208ms)

-- stdout --
	* The control-plane node cert-options-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-659000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-659000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-659000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-659000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-04 13:11:09.682421 -0700 PDT m=+2786.741021334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-659000 -n cert-options-659000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-659000 -n cert-options-659000: exit status 7 (30.719125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-659000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-659000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-659000
--- FAIL: TestCertOptions (10.23s)
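Every assertion in TestCertOptions fails for the same upstream reason: the qemu2 driver cannot reach /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the SAN and port checks run against a stopped host. A quick sanity sketch for the CI host, assuming socket_vmnet was installed as a launchd service (the service label is an assumption; adjust for a Homebrew install):

	# Does the socket the driver dials exist at all?
	ls -l /var/run/socket_vmnet

	# Is the socket_vmnet daemon loaded? (label assumed from a lima-vm-style install)
	sudo launchctl list | grep -i socket_vmnet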

TestCertExpiration (195.24s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-733000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-733000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.892659375s)

-- stdout --
	* [cert-expiration-733000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-733000" primary control-plane node in "cert-expiration-733000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-733000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-733000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-733000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-733000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-733000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.212620833s)

-- stdout --
	* [cert-expiration-733000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-733000" primary control-plane node in "cert-expiration-733000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-733000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-733000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-733000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-733000" primary control-plane node in "cert-expiration-733000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-733000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-04 13:14:09.654331 -0700 PDT m=+2966.716018084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-733000 -n cert-expiration-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-733000 -n cert-expiration-733000: exit status 7 (52.490833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-733000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-733000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-733000
--- FAIL: TestCertExpiration (195.24s)
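
The assertion at cert_options_test.go:136 expects the second start to warn that the 3-minute certificates have expired; since the VM never booted, no certs were ever provisioned and no warning could appear. A hypothetical sketch of that kind of expiry check with crypto/x509 (the certificate filename is illustrative; this is not the code the test runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; minikube keeps its cluster certs under ~/.minikube.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// A cert issued with --cert-expiration=3m is already expired by the
	// time of the restart three minutes later, so this branch is expected.
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}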

TestDockerFlags (10.38s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-174000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-174000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.155080417s)

-- stdout --
	* [docker-flags-174000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-174000" primary control-plane node in "docker-flags-174000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-174000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:10:49.201709    4383 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:10:49.201815    4383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:49.201819    4383 out.go:358] Setting ErrFile to fd 2...
	I0904 13:10:49.201832    4383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:49.201955    4383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:10:49.203008    4383 out.go:352] Setting JSON to false
	I0904 13:10:49.219197    4383 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4213,"bootTime":1725476436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:10:49.219257    4383 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:10:49.226250    4383 out.go:177] * [docker-flags-174000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:10:49.233088    4383 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:10:49.233140    4383 notify.go:220] Checking for updates...
	I0904 13:10:49.241177    4383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:10:49.244118    4383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:10:49.247176    4383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:10:49.250077    4383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:10:49.253122    4383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:10:49.256470    4383 config.go:182] Loaded profile config "force-systemd-flag-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:10:49.256537    4383 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:10:49.256589    4383 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:10:49.261048    4383 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:10:49.268092    4383 start.go:297] selected driver: qemu2
	I0904 13:10:49.268098    4383 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:10:49.268109    4383 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:10:49.270450    4383 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:10:49.274122    4383 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:10:49.277178    4383 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0904 13:10:49.277218    4383 cni.go:84] Creating CNI manager for ""
	I0904 13:10:49.277226    4383 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:10:49.277230    4383 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:10:49.277258    4383 start.go:340] cluster config:
	{Name:docker-flags-174000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:10:49.280964    4383 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:10:49.289046    4383 out.go:177] * Starting "docker-flags-174000" primary control-plane node in "docker-flags-174000" cluster
	I0904 13:10:49.293130    4383 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:10:49.293150    4383 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:10:49.293160    4383 cache.go:56] Caching tarball of preloaded images
	I0904 13:10:49.293240    4383 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:10:49.293247    4383 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:10:49.293328    4383 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/docker-flags-174000/config.json ...
	I0904 13:10:49.293341    4383 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/docker-flags-174000/config.json: {Name:mk28d9b7ac1e85c35ef7cdc09384caaae3d93f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:10:49.293563    4383 start.go:360] acquireMachinesLock for docker-flags-174000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:49.293607    4383 start.go:364] duration metric: took 34.625µs to acquireMachinesLock for "docker-flags-174000"
	I0904 13:10:49.293620    4383 start.go:93] Provisioning new machine with config: &{Name:docker-flags-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:49.293653    4383 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:49.302061    4383 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:49.320751    4383 start.go:159] libmachine.API.Create for "docker-flags-174000" (driver="qemu2")
	I0904 13:10:49.320777    4383 client.go:168] LocalClient.Create starting
	I0904 13:10:49.320854    4383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:49.320886    4383 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:49.320895    4383 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:49.320939    4383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:49.320963    4383 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:49.320970    4383 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:49.321329    4383 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:49.473705    4383 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:49.679709    4383 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:49.679718    4383 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:49.679956    4383 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2
	I0904 13:10:49.689928    4383 main.go:141] libmachine: STDOUT: 
	I0904 13:10:49.690010    4383 main.go:141] libmachine: STDERR: 
	I0904 13:10:49.690075    4383 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2 +20000M
	I0904 13:10:49.697963    4383 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:49.697986    4383 main.go:141] libmachine: STDERR: 
	I0904 13:10:49.697997    4383 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2
	I0904 13:10:49.698003    4383 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:49.698018    4383 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:49.698045    4383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:0d:20:ea:0e:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2
	I0904 13:10:49.699655    4383 main.go:141] libmachine: STDOUT: 
	I0904 13:10:49.699673    4383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:49.699692    4383 client.go:171] duration metric: took 378.917666ms to LocalClient.Create
	I0904 13:10:51.701821    4383 start.go:128] duration metric: took 2.40819375s to createHost
	I0904 13:10:51.701862    4383 start.go:83] releasing machines lock for "docker-flags-174000", held for 2.40828375s
	W0904 13:10:51.701940    4383 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:51.719150    4383 out.go:177] * Deleting "docker-flags-174000" in qemu2 ...
	W0904 13:10:51.750082    4383 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:51.750105    4383 start.go:729] Will try again in 5 seconds ...
	I0904 13:10:56.752290    4383 start.go:360] acquireMachinesLock for docker-flags-174000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:56.962503    4383 start.go:364] duration metric: took 210.026709ms to acquireMachinesLock for "docker-flags-174000"
	I0904 13:10:56.962594    4383 start.go:93] Provisioning new machine with config: &{Name:docker-flags-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:56.962830    4383 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:56.973299    4383 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:57.022172    4383 start.go:159] libmachine.API.Create for "docker-flags-174000" (driver="qemu2")
	I0904 13:10:57.022219    4383 client.go:168] LocalClient.Create starting
	I0904 13:10:57.022345    4383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:57.022413    4383 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:57.022429    4383 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:57.022499    4383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:57.022543    4383 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:57.022559    4383 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:57.023116    4383 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:57.199300    4383 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:57.253728    4383 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:57.253733    4383 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:57.253909    4383 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2
	I0904 13:10:57.263101    4383 main.go:141] libmachine: STDOUT: 
	I0904 13:10:57.263124    4383 main.go:141] libmachine: STDERR: 
	I0904 13:10:57.263169    4383 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2 +20000M
	I0904 13:10:57.271065    4383 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:57.271077    4383 main.go:141] libmachine: STDERR: 
	I0904 13:10:57.271092    4383 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2
	I0904 13:10:57.271097    4383 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:57.271108    4383 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:57.271136    4383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:0a:f6:f8:73:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/docker-flags-174000/disk.qcow2
	I0904 13:10:57.272663    4383 main.go:141] libmachine: STDOUT: 
	I0904 13:10:57.272679    4383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:57.272692    4383 client.go:171] duration metric: took 250.469917ms to LocalClient.Create
	I0904 13:10:59.274870    4383 start.go:128] duration metric: took 2.312032792s to createHost
	I0904 13:10:59.274935    4383 start.go:83] releasing machines lock for "docker-flags-174000", held for 2.312439s
	W0904 13:10:59.275292    4383 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:59.296877    4383 out.go:201] 
	W0904 13:10:59.301780    4383 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:10:59.301805    4383 out.go:270] * 
	* 
	W0904 13:10:59.304305    4383 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:10:59.314718    4383 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-174000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-174000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-174000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.001875ms)

-- stdout --
	* The control-plane node docker-flags-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-174000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-174000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-174000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-174000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-174000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-174000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (42.790375ms)

-- stdout --
	* The control-plane node docker-flags-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-174000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-174000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-174000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug* . output: "* The control-plane node docker-flags-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-174000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-09-04 13:10:59.452657 -0700 PDT m=+2776.511082167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-174000 -n docker-flags-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-174000 -n docker-flags-174000: exit status 7 (29.156542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-174000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-174000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-174000
--- FAIL: TestDockerFlags (10.38s)
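
For reference, the checks at docker_test.go:63 and docker_test.go:73 amount to running `systemctl show docker` inside the VM via `minikube ssh` and substring-matching the output; they could only report the "host is not running" message because the VM was never created. A condensed sketch of that check (binary, profile name, and command copied from the log above; simplified, not the actual test code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the test log shows, minus the test harness.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "docker-flags-174000",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed (the VM never started here): %v\n%s", err, out)
		return
	}
	// The test expects both --docker-env values to reach the docker unit.
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), kv) {
			fmt.Printf("expected %q in docker Environment, got: %s\n", kv, out)
		}
	}
}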

TestForceSystemdFlag (10.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-747000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-747000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.379642709s)

-- stdout --
	* [force-systemd-flag-747000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-747000" primary control-plane node in "force-systemd-flag-747000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-747000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:10:44.004987    4362 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:10:44.005119    4362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:44.005122    4362 out.go:358] Setting ErrFile to fd 2...
	I0904 13:10:44.005125    4362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:44.005270    4362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:10:44.006328    4362 out.go:352] Setting JSON to false
	I0904 13:10:44.022533    4362 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4208,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:10:44.022604    4362 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:10:44.029329    4362 out.go:177] * [force-systemd-flag-747000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:10:44.036180    4362 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:10:44.036209    4362 notify.go:220] Checking for updates...
	I0904 13:10:44.044235    4362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:10:44.047253    4362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:10:44.050313    4362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:10:44.053239    4362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:10:44.056234    4362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:10:44.059491    4362 config.go:182] Loaded profile config "force-systemd-env-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:10:44.059560    4362 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:10:44.059604    4362 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:10:44.064206    4362 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:10:44.071213    4362 start.go:297] selected driver: qemu2
	I0904 13:10:44.071219    4362 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:10:44.071226    4362 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:10:44.073548    4362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:10:44.077315    4362 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:10:44.080262    4362 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 13:10:44.080308    4362 cni.go:84] Creating CNI manager for ""
	I0904 13:10:44.080317    4362 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:10:44.080321    4362 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:10:44.080366    4362 start.go:340] cluster config:
	{Name:force-systemd-flag-747000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:10:44.084008    4362 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:10:44.091175    4362 out.go:177] * Starting "force-systemd-flag-747000" primary control-plane node in "force-systemd-flag-747000" cluster
	I0904 13:10:44.095210    4362 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:10:44.095226    4362 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:10:44.095234    4362 cache.go:56] Caching tarball of preloaded images
	I0904 13:10:44.095286    4362 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:10:44.095292    4362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:10:44.095349    4362 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/force-systemd-flag-747000/config.json ...
	I0904 13:10:44.095361    4362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/force-systemd-flag-747000/config.json: {Name:mkdbe81a88afe2d9a245d36b1e5cef7172528b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:10:44.095590    4362 start.go:360] acquireMachinesLock for force-systemd-flag-747000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:44.095625    4362 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "force-systemd-flag-747000"
	I0904 13:10:44.095637    4362 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:44.095672    4362 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:44.104208    4362 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:44.121403    4362 start.go:159] libmachine.API.Create for "force-systemd-flag-747000" (driver="qemu2")
	I0904 13:10:44.121429    4362 client.go:168] LocalClient.Create starting
	I0904 13:10:44.121494    4362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:44.121522    4362 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:44.121531    4362 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:44.121574    4362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:44.121596    4362 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:44.121604    4362 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:44.122104    4362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:44.273157    4362 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:44.522063    4362 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:44.522070    4362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:44.522360    4362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2
	I0904 13:10:44.532346    4362 main.go:141] libmachine: STDOUT: 
	I0904 13:10:44.532366    4362 main.go:141] libmachine: STDERR: 
	I0904 13:10:44.532416    4362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2 +20000M
	I0904 13:10:44.540354    4362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:44.540378    4362 main.go:141] libmachine: STDERR: 
	I0904 13:10:44.540394    4362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2
	I0904 13:10:44.540399    4362 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:44.540410    4362 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:44.540438    4362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:9e:1f:7f:38:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2
	I0904 13:10:44.542041    4362 main.go:141] libmachine: STDOUT: 
	I0904 13:10:44.542055    4362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:44.542073    4362 client.go:171] duration metric: took 420.6475ms to LocalClient.Create
	I0904 13:10:46.544204    4362 start.go:128] duration metric: took 2.448555083s to createHost
	I0904 13:10:46.544249    4362 start.go:83] releasing machines lock for "force-systemd-flag-747000", held for 2.4486535s
	W0904 13:10:46.544312    4362 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:46.564356    4362 out.go:177] * Deleting "force-systemd-flag-747000" in qemu2 ...
	W0904 13:10:46.592478    4362 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:46.592497    4362 start.go:729] Will try again in 5 seconds ...
	I0904 13:10:51.594675    4362 start.go:360] acquireMachinesLock for force-systemd-flag-747000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:51.701976    4362 start.go:364] duration metric: took 107.174625ms to acquireMachinesLock for "force-systemd-flag-747000"
	I0904 13:10:51.702133    4362 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:51.702439    4362 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:51.714224    4362 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:51.763381    4362 start.go:159] libmachine.API.Create for "force-systemd-flag-747000" (driver="qemu2")
	I0904 13:10:51.763431    4362 client.go:168] LocalClient.Create starting
	I0904 13:10:51.763555    4362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:51.763624    4362 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:51.763651    4362 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:51.763712    4362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:51.763757    4362 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:51.763771    4362 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:51.764305    4362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:51.938861    4362 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:52.291145    4362 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:52.291158    4362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:52.291407    4362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2
	I0904 13:10:52.301338    4362 main.go:141] libmachine: STDOUT: 
	I0904 13:10:52.301360    4362 main.go:141] libmachine: STDERR: 
	I0904 13:10:52.301423    4362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2 +20000M
	I0904 13:10:52.309327    4362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:52.309354    4362 main.go:141] libmachine: STDERR: 
	I0904 13:10:52.309368    4362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2
	I0904 13:10:52.309374    4362 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:52.309381    4362 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:52.309417    4362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:cb:52:30:29:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-flag-747000/disk.qcow2
	I0904 13:10:52.311035    4362 main.go:141] libmachine: STDOUT: 
	I0904 13:10:52.311053    4362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:52.311065    4362 client.go:171] duration metric: took 547.639125ms to LocalClient.Create
	I0904 13:10:54.312807    4362 start.go:128] duration metric: took 2.610370917s to createHost
	I0904 13:10:54.312871    4362 start.go:83] releasing machines lock for "force-systemd-flag-747000", held for 2.610915375s
	W0904 13:10:54.313221    4362 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-747000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-747000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:54.324493    4362 out.go:201] 
	W0904 13:10:54.328763    4362 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:10:54.328799    4362 out.go:270] * 
	* 
	W0904 13:10:54.331718    4362 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:10:54.341723    4362 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-747000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-747000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-747000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.734583ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-747000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-747000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-747000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-04 13:10:54.436369 -0700 PDT m=+2771.494708334
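Editor's note: the assertion at docker_test.go:110 above only fails because the VM never came up; the underlying check is a plain `docker info` query with a Go template. A minimal standalone sketch of the same probe, assuming a reachable Docker daemon (the helper below is illustrative, not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Query the Docker cgroup driver the same way the test does:
// `docker info --format {{.CgroupDriver}}` prints "systemd" or "cgroupfs".
func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err) // e.g. daemon or VM not running
		return
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("cgroup driver:", driver)
}

On a VM started with --force-systemd the test expects "systemd" here; in this run the command never reached a running host.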
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-747000 -n force-systemd-flag-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-747000 -n force-systemd-flag-747000: exit status 7 (34.236041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-747000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-747000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-747000
--- FAIL: TestForceSystemdFlag (10.57s)
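Editor's note: every create attempt above dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'), i.e. nothing is listening on that unix socket on the build host. A minimal sketch of a pre-flight check (the socket path is taken from the failing command line; the probe itself is an assumption, not part of minikube):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the unix socket that socket_vmnet_client hands to QEMU.
// "connection refused" here reproduces the failure seen in the log.
func main() {
	const sock = "/var/run/socket_vmnet" // from the failing command line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet is not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}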

                                                
                                    
TestForceSystemdEnv (10.38s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-978000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-978000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.190341583s)

                                                
                                                
-- stdout --
	* [force-systemd-env-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-978000" primary control-plane node in "force-systemd-env-978000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-978000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:10:38.822505    4327 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:10:38.822619    4327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:38.822623    4327 out.go:358] Setting ErrFile to fd 2...
	I0904 13:10:38.822625    4327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:10:38.822750    4327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:10:38.823804    4327 out.go:352] Setting JSON to false
	I0904 13:10:38.840142    4327 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4202,"bootTime":1725476436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:10:38.840203    4327 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:10:38.846273    4327 out.go:177] * [force-systemd-env-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:10:38.853326    4327 notify.go:220] Checking for updates...
	I0904 13:10:38.858265    4327 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:10:38.866192    4327 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:10:38.877258    4327 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:10:38.884225    4327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:10:38.891210    4327 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:10:38.899237    4327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0904 13:10:38.903540    4327 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:10:38.903586    4327 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:10:38.907215    4327 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:10:38.914188    4327 start.go:297] selected driver: qemu2
	I0904 13:10:38.914194    4327 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:10:38.914198    4327 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:10:38.916538    4327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:10:38.921220    4327 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:10:38.925270    4327 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 13:10:38.925285    4327 cni.go:84] Creating CNI manager for ""
	I0904 13:10:38.925292    4327 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:10:38.925296    4327 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:10:38.925330    4327 start.go:340] cluster config:
	{Name:force-systemd-env-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:10:38.928951    4327 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:10:38.937245    4327 out.go:177] * Starting "force-systemd-env-978000" primary control-plane node in "force-systemd-env-978000" cluster
	I0904 13:10:38.941164    4327 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:10:38.941183    4327 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:10:38.941190    4327 cache.go:56] Caching tarball of preloaded images
	I0904 13:10:38.941245    4327 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:10:38.941251    4327 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:10:38.941312    4327 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/force-systemd-env-978000/config.json ...
	I0904 13:10:38.941323    4327 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/force-systemd-env-978000/config.json: {Name:mk5d3fdb307f1956ec01343608da4d00389b3760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:10:38.941618    4327 start.go:360] acquireMachinesLock for force-systemd-env-978000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:38.941653    4327 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "force-systemd-env-978000"
	I0904 13:10:38.941665    4327 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:38.941697    4327 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:38.949226    4327 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:38.965484    4327 start.go:159] libmachine.API.Create for "force-systemd-env-978000" (driver="qemu2")
	I0904 13:10:38.965507    4327 client.go:168] LocalClient.Create starting
	I0904 13:10:38.965569    4327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:38.965599    4327 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:38.965617    4327 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:38.965652    4327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:38.965674    4327 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:38.965683    4327 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:38.965998    4327 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:39.165639    4327 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:39.234574    4327 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:39.234583    4327 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:39.234834    4327 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2
	I0904 13:10:39.244744    4327 main.go:141] libmachine: STDOUT: 
	I0904 13:10:39.244778    4327 main.go:141] libmachine: STDERR: 
	I0904 13:10:39.244854    4327 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2 +20000M
	I0904 13:10:39.253956    4327 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:39.253980    4327 main.go:141] libmachine: STDERR: 
	I0904 13:10:39.253996    4327 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2
	I0904 13:10:39.254002    4327 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:39.254022    4327 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:39.254061    4327 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:3e:2e:ec:cf:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2
	I0904 13:10:39.255941    4327 main.go:141] libmachine: STDOUT: 
	I0904 13:10:39.255958    4327 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:39.255979    4327 client.go:171] duration metric: took 290.471792ms to LocalClient.Create
	I0904 13:10:41.258233    4327 start.go:128] duration metric: took 2.316541084s to createHost
	I0904 13:10:41.258304    4327 start.go:83] releasing machines lock for "force-systemd-env-978000", held for 2.316681583s
	W0904 13:10:41.258363    4327 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:41.269608    4327 out.go:177] * Deleting "force-systemd-env-978000" in qemu2 ...
	W0904 13:10:41.302321    4327 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:41.302352    4327 start.go:729] Will try again in 5 seconds ...
	I0904 13:10:46.304442    4327 start.go:360] acquireMachinesLock for force-systemd-env-978000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:46.544431    4327 start.go:364] duration metric: took 239.86025ms to acquireMachinesLock for "force-systemd-env-978000"
	I0904 13:10:46.544587    4327 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:46.544776    4327 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:46.555500    4327 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0904 13:10:46.607270    4327 start.go:159] libmachine.API.Create for "force-systemd-env-978000" (driver="qemu2")
	I0904 13:10:46.607326    4327 client.go:168] LocalClient.Create starting
	I0904 13:10:46.607442    4327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:46.607511    4327 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:46.607527    4327 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:46.607593    4327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:46.607635    4327 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:46.607650    4327 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:46.608291    4327 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:46.799325    4327 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:46.917930    4327 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:46.917935    4327 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:46.918128    4327 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2
	I0904 13:10:46.927516    4327 main.go:141] libmachine: STDOUT: 
	I0904 13:10:46.927533    4327 main.go:141] libmachine: STDERR: 
	I0904 13:10:46.927593    4327 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2 +20000M
	I0904 13:10:46.935432    4327 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:46.935456    4327 main.go:141] libmachine: STDERR: 
	I0904 13:10:46.935468    4327 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2
	I0904 13:10:46.935476    4327 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:46.935486    4327 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:46.935511    4327 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e0:de:18:e9:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/force-systemd-env-978000/disk.qcow2
	I0904 13:10:46.937126    4327 main.go:141] libmachine: STDOUT: 
	I0904 13:10:46.937141    4327 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:46.937154    4327 client.go:171] duration metric: took 329.828084ms to LocalClient.Create
	I0904 13:10:48.939322    4327 start.go:128] duration metric: took 2.394550417s to createHost
	I0904 13:10:48.939383    4327 start.go:83] releasing machines lock for "force-systemd-env-978000", held for 2.394965667s
	W0904 13:10:48.939690    4327 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-978000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-978000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:48.952128    4327 out.go:201] 
	W0904 13:10:48.957033    4327 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:10:48.957061    4327 out.go:270] * 
	* 
	W0904 13:10:48.959759    4327 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:10:48.969106    4327 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-978000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-978000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-978000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.784708ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-978000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-978000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-978000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-04 13:10:49.063099 -0700 PDT m=+2766.121346251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-978000 -n force-systemd-env-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-978000 -n force-systemd-env-978000: exit status 7 (33.421917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-978000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-978000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-978000
--- FAIL: TestForceSystemdEnv (10.38s)
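Editor's note: the failure mode is identical to TestForceSystemdFlag: both create attempts die on the host-side socket before the MINIKUBE_FORCE_SYSTEMD logic is ever exercised. The retry shape visible in the log ('Will try again in 5 seconds ...', then exactly one more attempt) compresses to the following sketch; createHost here is a stand-in that always fails the way the log does, not minikube's real function:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost is a stand-in for the real libmachine host creation.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2 // the log shows exactly two create attempts
	for i := 1; i <= attempts; i++ {
		if err := createHost(); err == nil {
			fmt.Println("host created")
			return
		} else {
			fmt.Printf("attempt %d: %v\n", i, err)
		}
		if i < attempts {
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		}
	}
	fmt.Println("exiting with GUEST_PROVISION")
}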

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (31.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-143000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-143000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-r7tzh" [cde65b9c-6ab1-4475-b29d-0859555ddde0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0904 12:44:08.798633    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-r7tzh" [cde65b9c-6ab1-4475-b29d-0859555ddde0] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.008655166s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32410
functional_test.go:1661: error fetching http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32410: Get "http://192.168.105.4:32410": dial tcp 192.168.105.4:32410: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-143000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-r7tzh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-143000/192.168.105.4
Start Time:       Wed, 04 Sep 2024 12:44:07 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://795d717048d2c928aca581fa17bbf7df7e5cab76ca210f7141d3c1d9a68c07c5
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 04 Sep 2024 12:44:27 -0700
Finished:     Wed, 04 Sep 2024 12:44:27 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlb52 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qlb52:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-r7tzh to functional-143000
Normal   Pulling    29s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     26s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.545s (3.545s including waiting). Image size: 84957542 bytes.
Normal   Created    10s (x3 over 25s)  kubelet            Created container echoserver-arm
Normal   Started    10s (x3 over 25s)  kubelet            Started container echoserver-arm
Normal   Pulled     10s (x2 over 25s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    9s (x3 over 24s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-r7tzh_default(cde65b9c-6ab1-4475-b29d-0859555ddde0)

                                                
                                                
functional_test.go:1608: (dbg) Run:  kubectl --context functional-143000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
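Editor's note: the single log line above is the actual crash cause: the container entrypoint execs /usr/sbin/nginx, and "exec format error" means the binary's architecture does not match the arm64 node. One way to confirm such a mismatch is to inspect the binary's ELF header; a sketch using the Go standard library (the binary path is whatever copy you can extract from the image, hypothetical here):

package main

import (
	"debug/elf"
	"fmt"
	"os"
)

// Print the machine type of an ELF binary; on an arm64 host,
// anything other than EM_AARCH64 will trigger "exec format error".
func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: elfcheck <binary>")
		return
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		fmt.Println("not a readable ELF file:", err)
		return
	}
	defer f.Close()
	fmt.Println("ELF machine:", f.Machine) // e.g. elf.EM_AARCH64, elf.EM_X86_64
}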
functional_test.go:1614: (dbg) Run:  kubectl --context functional-143000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.29.242
IPs:                      10.100.29.242
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32410/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
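Editor's note: note the empty Endpoints field above: with the only pod crash-looping, the Service has no ready backends, so the NodePort probe at functional_test.go:1661 keeps getting "connection refused". A minimal version of that probe loop (the URL is the one reported by the test; the retry cadence is an assumption, not the suite's exact timing):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Poll the NodePort URL the way the test does; with no ready
// endpoints behind the Service, every attempt fails with
// "connect: connection refused".
func main() {
	const url = "http://192.168.105.4:32410" // endpoint reported by the test
	for i := 0; i < 7; i++ { // the log shows seven failed fetches
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("fetch failed:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("service responded:", resp.Status)
		return
	}
}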
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-143000 -n functional-143000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port139397152/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh -- ls                                                                                          | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh cat                                                                                            | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | /mount-9p/test-1725479069190839000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh stat                                                                                           | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh stat                                                                                           | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh sudo                                                                                           | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2654515076/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh -- ls                                                                                          | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh sudo                                                                                           | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount1    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount2    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount3    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-143000 ssh findmnt                                                                                        | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT | 04 Sep 24 12:44 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-143000                                                                                                 | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-143000 --dry-run                                                                                       | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-143000 | jenkins | v1.34.0 | 04 Sep 24 12:44 PDT |                     |
	|           | -p functional-143000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 12:44:37
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 12:44:37.847694    2697 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:44:37.847846    2697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:44:37.847849    2697 out.go:358] Setting ErrFile to fd 2...
	I0904 12:44:37.847851    2697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:44:37.847985    2697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:44:37.849624    2697 out.go:352] Setting JSON to false
	I0904 12:44:37.867916    2697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2641,"bootTime":1725476436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 12:44:37.868028    2697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 12:44:37.871665    2697 out.go:177] * [functional-143000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 12:44:37.878742    2697 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 12:44:37.878754    2697 notify.go:220] Checking for updates...
	I0904 12:44:37.885647    2697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:44:37.888748    2697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 12:44:37.891630    2697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 12:44:37.894703    2697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 12:44:37.897668    2697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 12:44:37.899031    2697 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:44:37.899310    2697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 12:44:37.903666    2697 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 12:44:37.910528    2697 start.go:297] selected driver: qemu2
	I0904 12:44:37.910537    2697 start.go:901] validating driver "qemu2" against &{Name:functional-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:44:37.910582    2697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 12:44:37.912884    2697 cni.go:84] Creating CNI manager for ""
	I0904 12:44:37.912900    2697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 12:44:37.912956    2697 start.go:340] cluster config:
	{Name:functional-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:44:37.925659    2697 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 04 19:44:31 functional-143000 dockerd[6021]: time="2024-09-04T19:44:31.453762768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 04 19:44:31 functional-143000 dockerd[6021]: time="2024-09-04T19:44:31.453961092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 04 19:44:31 functional-143000 dockerd[6021]: time="2024-09-04T19:44:31.454082003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 04 19:44:31 functional-143000 cri-dockerd[6269]: time="2024-09-04T19:44:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b0ac6d5bbd802c3475042d3066e193ac2b76e7426cefb1813b4ba1bed79f11f6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 04 19:44:32 functional-143000 cri-dockerd[6269]: time="2024-09-04T19:44:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 04 19:44:32 functional-143000 dockerd[6021]: time="2024-09-04T19:44:32.885783243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 04 19:44:32 functional-143000 dockerd[6021]: time="2024-09-04T19:44:32.885817741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 04 19:44:32 functional-143000 dockerd[6021]: time="2024-09-04T19:44:32.885826574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 04 19:44:32 functional-143000 dockerd[6021]: time="2024-09-04T19:44:32.886254054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 04 19:44:32 functional-143000 dockerd[6015]: time="2024-09-04T19:44:32.920017833Z" level=info msg="ignoring event" container=119055ffc614b5f05bc65da124951ec398ee427044cde90138cfe996c12f9496 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:44:32 functional-143000 dockerd[6021]: time="2024-09-04T19:44:32.920220948Z" level=info msg="shim disconnected" id=119055ffc614b5f05bc65da124951ec398ee427044cde90138cfe996c12f9496 namespace=moby
	Sep 04 19:44:32 functional-143000 dockerd[6021]: time="2024-09-04T19:44:32.920251155Z" level=warning msg="cleaning up after shim disconnected" id=119055ffc614b5f05bc65da124951ec398ee427044cde90138cfe996c12f9496 namespace=moby
	Sep 04 19:44:32 functional-143000 dockerd[6021]: time="2024-09-04T19:44:32.920255280Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.121730710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.121788374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.121797165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.121836663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 04 19:44:34 functional-143000 dockerd[6015]: time="2024-09-04T19:44:34.150554395Z" level=info msg="ignoring event" container=9b1984edfdb9e8b57d50c2279465b41ff6708319454c64ee67c7be21eecb4dd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.150853714Z" level=info msg="shim disconnected" id=9b1984edfdb9e8b57d50c2279465b41ff6708319454c64ee67c7be21eecb4dd9 namespace=moby
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.150885004Z" level=warning msg="cleaning up after shim disconnected" id=9b1984edfdb9e8b57d50c2279465b41ff6708319454c64ee67c7be21eecb4dd9 namespace=moby
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.150889170Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:44:34 functional-143000 dockerd[6015]: time="2024-09-04T19:44:34.178964350Z" level=info msg="ignoring event" container=b0ac6d5bbd802c3475042d3066e193ac2b76e7426cefb1813b4ba1bed79f11f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.179138550Z" level=info msg="shim disconnected" id=b0ac6d5bbd802c3475042d3066e193ac2b76e7426cefb1813b4ba1bed79f11f6 namespace=moby
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.179170215Z" level=warning msg="cleaning up after shim disconnected" id=b0ac6d5bbd802c3475042d3066e193ac2b76e7426cefb1813b4ba1bed79f11f6 namespace=moby
	Sep 04 19:44:34 functional-143000 dockerd[6021]: time="2024-09-04T19:44:34.179187381Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9b1984edfdb9e       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            2                   266369a66eb38       hello-node-64b4f8f9ff-x4fnh
	119055ffc614b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 seconds ago        Exited              mount-munger              0                   b0ac6d5bbd802       busybox-mount
	795d717048d2c       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            2                   23674dff7e873       hello-node-connect-65d86f57f4-r7tzh
	a2d366170ee1c       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                         24 seconds ago       Running             myfrontend                0                   dcb4d8b3ec7e6       sp-pod
	920f09d43d7a3       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                         39 seconds ago       Running             nginx                     0                   fa272a1ee7538       nginx-svc
	80873eb951bf1       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   005dd7484a918       coredns-6f6b679f8f-5qn5p
	003b60e8d0209       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   04c2d234f36cf       kube-proxy-4dsp8
	2b8e57e0a510d       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   45bbd982582db       storage-provisioner
	e7f707198ab0d       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   2e6cfe8701102       kube-scheduler-functional-143000
	ec3445aa2b5f9       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   bd9fe645fea19       etcd-functional-143000
	690e93f97402e       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   a74c2a4b43768       kube-controller-manager-functional-143000
	3b3bfd2a95b12       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   af716838465fa       kube-apiserver-functional-143000
	139989dffc04f       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   a8596eaa77698       coredns-6f6b679f8f-5qn5p
	496ee0afa53b3       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   fe7458ca6d6df       storage-provisioner
	468aaaf0d649e       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   5c2a7bbf88df8       kube-proxy-4dsp8
	06bd59aa7c4c8       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   63e7e0a087a99       etcd-functional-143000
	3c0794799ebc6       fbbbd428abb4d                                                                                         About a minute ago   Exited              kube-scheduler            1                   2e1dd2415e3bb       kube-scheduler-functional-143000
	0ca8a8a934381       fcb0683e6bdbd                                                                                         About a minute ago   Exited              kube-controller-manager   1                   b7d303c86070c       kube-controller-manager-functional-143000
	
	
	==> coredns [139989dffc04] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37684 - 42256 "HINFO IN 5149652880064842449.561678928653281121. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.008993955s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [80873eb951bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44654 - 3141 "HINFO IN 7565908254960965498.2452697154633047455. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009588689s
	[INFO] 10.244.0.1:13771 - 23400 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000090912s
	[INFO] 10.244.0.1:8399 - 53871 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000082121s
	[INFO] 10.244.0.1:24064 - 703 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00163921s
	[INFO] 10.244.0.1:60629 - 1765 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000062663s
	[INFO] 10.244.0.1:18476 - 46753 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000081704s
	[INFO] 10.244.0.1:35696 - 27816 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000245071s
	
	
	==> describe nodes <==
	Name:               functional-143000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-143000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=functional-143000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T12_41_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 19:41:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-143000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 19:44:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 19:44:30 +0000   Wed, 04 Sep 2024 19:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 19:44:30 +0000   Wed, 04 Sep 2024 19:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 19:44:30 +0000   Wed, 04 Sep 2024 19:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 19:44:30 +0000   Wed, 04 Sep 2024 19:41:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-143000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a079bbc739c49cf975195a183203823
	  System UUID:                9a079bbc739c49cf975195a183203823
	  Boot ID:                    33f017b2-1da8-48f2-ae48-83be0d047939
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-x4fnh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     hello-node-connect-65d86f57f4-r7tzh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 coredns-6f6b679f8f-5qn5p                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m50s
	  kube-system                 etcd-functional-143000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m56s
	  kube-system                 kube-apiserver-functional-143000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-functional-143000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-proxy-4dsp8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-scheduler-functional-143000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m49s                kube-proxy       
	  Normal  Starting                 67s                  kube-proxy       
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m56s                kubelet          Node functional-143000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m56s                kubelet          Node functional-143000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m56s                kubelet          Node functional-143000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m56s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m52s                kubelet          Node functional-143000 status is now: NodeReady
	  Normal  RegisteredNode           2m51s                node-controller  Node functional-143000 event: Registered Node functional-143000 in Controller
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node functional-143000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node functional-143000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)  kubelet          Node functional-143000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           109s                 node-controller  Node functional-143000 event: Registered Node functional-143000 in Controller
	  Normal  Starting                 71s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 71s)    kubelet          Node functional-143000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 71s)    kubelet          Node functional-143000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 71s)    kubelet          Node functional-143000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                  node-controller  Node functional-143000 event: Registered Node functional-143000 in Controller
	
	
	==> dmesg <==
	[  +4.417974] kauditd_printk_skb: 199 callbacks suppressed
	[  +6.578097] kauditd_printk_skb: 33 callbacks suppressed
	[Sep 4 19:43] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[ +10.005207] systemd-fstab-generator[5538]: Ignoring "noauto" option for root device
	[  +0.052506] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.096497] systemd-fstab-generator[5572]: Ignoring "noauto" option for root device
	[  +0.096435] systemd-fstab-generator[5585]: Ignoring "noauto" option for root device
	[  +0.113630] systemd-fstab-generator[5598]: Ignoring "noauto" option for root device
	[  +5.119855] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.393587] systemd-fstab-generator[6222]: Ignoring "noauto" option for root device
	[  +0.080906] systemd-fstab-generator[6234]: Ignoring "noauto" option for root device
	[  +0.091040] systemd-fstab-generator[6246]: Ignoring "noauto" option for root device
	[  +0.100583] systemd-fstab-generator[6261]: Ignoring "noauto" option for root device
	[  +0.235077] systemd-fstab-generator[6428]: Ignoring "noauto" option for root device
	[  +0.915400] systemd-fstab-generator[6548]: Ignoring "noauto" option for root device
	[  +3.430690] kauditd_printk_skb: 199 callbacks suppressed
	[  +7.690845] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.354254] systemd-fstab-generator[7583]: Ignoring "noauto" option for root device
	[  +5.373828] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.415388] kauditd_printk_skb: 29 callbacks suppressed
	[Sep 4 19:44] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.716994] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.943436] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.155103] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.777967] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [06bd59aa7c4c] <==
	{"level":"info","ts":"2024-09-04T19:42:45.833052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-04T19:42:45.833113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-04T19:42:45.833191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-04T19:42:45.833241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-04T19:42:45.833272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-04T19:42:45.833303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-04T19:42:45.838753Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-143000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-04T19:42:45.839048Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-04T19:42:45.839262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-04T19:42:45.839221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-04T19:42:45.839676Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-04T19:42:45.840875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-04T19:42:45.842576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-04T19:42:45.840875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-04T19:42:45.844183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-04T19:43:13.091400Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-04T19:43:13.091428Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-143000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-04T19:43:13.091487Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-04T19:43:13.091497Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-04T19:43:13.092744Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-04T19:43:13.092784Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-04T19:43:13.111863Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-04T19:43:13.113306Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-04T19:43:13.113334Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-04T19:43:13.113339Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-143000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [ec3445aa2b5f] <==
	{"level":"info","ts":"2024-09-04T19:43:28.073496Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-04T19:43:28.073592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-04T19:43:28.073636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-04T19:43:28.075698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-04T19:43:28.077335Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-04T19:43:28.079708Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-04T19:43:28.080130Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-04T19:43:28.080598Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-04T19:43:28.080631Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-04T19:43:29.367979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-04T19:43:29.368152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-04T19:43:29.368224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-04T19:43:29.368599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-04T19:43:29.368644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-04T19:43:29.368678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-04T19:43:29.368701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-04T19:43:29.373392Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-143000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-04T19:43:29.373547Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-04T19:43:29.374166Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-04T19:43:29.374219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-04T19:43:29.374264Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-04T19:43:29.376244Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-04T19:43:29.376244Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-04T19:43:29.378563Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-04T19:43:29.379320Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 19:44:38 up 3 min,  0 users,  load average: 0.55, 0.38, 0.16
	Linux functional-143000 5.10.207 #1 SMP PREEMPT Tue Sep 3 18:23:52 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b3bfd2a95b1] <==
	I0904 19:43:30.009219       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0904 19:43:30.009252       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0904 19:43:30.009315       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0904 19:43:30.009528       1 aggregator.go:171] initial CRD sync complete...
	I0904 19:43:30.009555       1 autoregister_controller.go:144] Starting autoregister controller
	I0904 19:43:30.009570       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0904 19:43:30.009588       1 cache.go:39] Caches are synced for autoregister controller
	I0904 19:43:30.011439       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0904 19:43:30.032848       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 19:43:30.882461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 19:43:31.113519       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0904 19:43:31.117229       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0904 19:43:31.142786       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0904 19:43:31.154115       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 19:43:31.163951       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 19:43:32.651573       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 19:43:32.700214       1 controller.go:615] quota admission added evaluator for: endpoints
	I0904 19:43:51.433162       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.45.172"}
	I0904 19:43:56.201373       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.193.119"}
	I0904 19:44:07.571755       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0904 19:44:07.613102       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.29.242"}
	I0904 19:44:21.961297       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.20.141"}
	I0904 19:44:38.429344       1 controller.go:615] quota admission added evaluator for: namespaces
	I0904 19:44:38.573139       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.190.247"}
	I0904 19:44:38.597495       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.66.43"}
	
	
	==> kube-controller-manager [0ca8a8a93438] <==
	I0904 19:42:49.861302       1 shared_informer.go:320] Caches are synced for GC
	I0904 19:42:49.863060       1 shared_informer.go:320] Caches are synced for node
	I0904 19:42:49.863097       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0904 19:42:49.863110       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0904 19:42:49.863122       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0904 19:42:49.863126       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0904 19:42:49.863169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-143000"
	I0904 19:42:49.865491       1 shared_informer.go:320] Caches are synced for daemon sets
	I0904 19:42:49.870938       1 shared_informer.go:320] Caches are synced for attach detach
	I0904 19:42:49.877142       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0904 19:42:49.887749       1 shared_informer.go:320] Caches are synced for PV protection
	I0904 19:42:49.889915       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0904 19:42:49.905783       1 shared_informer.go:320] Caches are synced for taint
	I0904 19:42:49.905901       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 19:42:49.905957       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-143000"
	I0904 19:42:49.906048       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 19:42:49.911160       1 shared_informer.go:320] Caches are synced for persistent volume
	I0904 19:42:49.911249       1 shared_informer.go:320] Caches are synced for TTL
	I0904 19:42:49.970513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="261.585554ms"
	I0904 19:42:49.971344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="597.722µs"
	I0904 19:42:50.322374       1 shared_informer.go:320] Caches are synced for garbage collector
	I0904 19:42:50.409902       1 shared_informer.go:320] Caches are synced for garbage collector
	I0904 19:42:50.409978       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 19:42:54.124851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="9.809912ms"
	I0904 19:42:54.125957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="34.498µs"
	
	
	==> kube-controller-manager [690e93f97402] <==
	I0904 19:44:30.743355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-143000"
	I0904 19:44:34.085151       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="48.79µs"
	I0904 19:44:35.107719       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="55.539µs"
	I0904 19:44:38.483304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="27.006984ms"
	E0904 19:44:38.483636       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0904 19:44:38.487349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.925691ms"
	E0904 19:44:38.487623       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0904 19:44:38.492377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.43436ms"
	E0904 19:44:38.492400       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0904 19:44:38.492421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.734242ms"
	E0904 19:44:38.492427       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0904 19:44:38.500032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.387284ms"
	E0904 19:44:38.500050       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0904 19:44:38.500166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.452155ms"
	E0904 19:44:38.500198       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0904 19:44:38.506330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="1.931493ms"
	E0904 19:44:38.506346       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0904 19:44:38.543052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="20.370878ms"
	I0904 19:44:38.548543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.464202ms"
	I0904 19:44:38.551116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="20.166µs"
	I0904 19:44:38.554434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.374µs"
	I0904 19:44:38.562440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.143753ms"
	I0904 19:44:38.573440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.974194ms"
	I0904 19:44:38.590058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="16.589097ms"
	I0904 19:44:38.590094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="20.624µs"
	
	
	==> kube-proxy [003b60e8d020] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0904 19:43:30.606665       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0904 19:43:30.611452       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0904 19:43:30.611484       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 19:43:30.619735       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0904 19:43:30.619751       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 19:43:30.619763       1 server_linux.go:169] "Using iptables Proxier"
	I0904 19:43:30.620344       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 19:43:30.620430       1 server.go:483] "Version info" version="v1.31.0"
	I0904 19:43:30.620444       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 19:43:30.620880       1 config.go:197] "Starting service config controller"
	I0904 19:43:30.620894       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 19:43:30.620902       1 config.go:104] "Starting endpoint slice config controller"
	I0904 19:43:30.620905       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 19:43:30.621076       1 config.go:326] "Starting node config controller"
	I0904 19:43:30.621079       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 19:43:30.721856       1 shared_informer.go:320] Caches are synced for node config
	I0904 19:43:30.721857       1 shared_informer.go:320] Caches are synced for service config
	I0904 19:43:30.721874       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [468aaaf0d649] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0904 19:42:47.803938       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0904 19:42:47.810451       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0904 19:42:47.810479       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 19:42:47.825180       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0904 19:42:47.825200       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 19:42:47.825216       1 server_linux.go:169] "Using iptables Proxier"
	I0904 19:42:47.826294       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 19:42:47.826456       1 server.go:483] "Version info" version="v1.31.0"
	I0904 19:42:47.826467       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 19:42:47.826938       1 config.go:197] "Starting service config controller"
	I0904 19:42:47.826954       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 19:42:47.826963       1 config.go:104] "Starting endpoint slice config controller"
	I0904 19:42:47.826967       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 19:42:47.827167       1 config.go:326] "Starting node config controller"
	I0904 19:42:47.827170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 19:42:47.928685       1 shared_informer.go:320] Caches are synced for node config
	I0904 19:42:47.928688       1 shared_informer.go:320] Caches are synced for service config
	I0904 19:42:47.928722       1 shared_informer.go:320] Caches are synced for endpoint slice config
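
	Note on both kube-proxy logs above: the nftables cleanup error is benign here (the guest kernel lacks nftables support, so kube-proxy falls back to the iptables proxier), and the nodePortAddresses warning is advisory. A minimal way to follow the log's own hint, assuming the stock kubeadm layout in v1.31 where kube-proxy reads its KubeProxyConfiguration from the config.conf key of the kube-proxy ConfigMap:

	kubectl --context functional-143000 -n kube-system edit configmap kube-proxy
	# in the config.conf key, set:
	#   nodePortAddresses: ["primary"]
	kubectl --context functional-143000 -n kube-system rollout restart daemonset kube-proxy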
	
	
	==> kube-scheduler [3c0794799ebc] <==
	I0904 19:42:44.501160       1 serving.go:386] Generated self-signed cert in-memory
	W0904 19:42:46.357103       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 19:42:46.357201       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 19:42:46.357238       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 19:42:46.357259       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 19:42:46.389450       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0904 19:42:46.389468       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 19:42:46.390791       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0904 19:42:46.390871       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 19:42:46.390885       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0904 19:42:46.390920       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 19:42:46.492361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0904 19:43:13.101264       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0904 19:43:13.101316       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0904 19:43:13.101379       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e7f707198ab0] <==
	I0904 19:43:28.279305       1 serving.go:386] Generated self-signed cert in-memory
	W0904 19:43:29.910225       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 19:43:29.910242       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 19:43:29.910247       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 19:43:29.910250       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 19:43:29.927899       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0904 19:43:29.927917       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 19:43:29.929388       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0904 19:43:29.929753       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 19:43:29.929770       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0904 19:43:29.929777       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 19:43:30.030035       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
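
	Note: both scheduler instances log the same trio of authentication warnings and then proceed, so these are not the failure cause; the first instance's exit at 19:43:13 ("finished without leader elect") appears to coincide with the control-plane restart. If one did want to silence the warnings, a sketch of the log's own suggested rolebinding, filled in for the identity named in the error (the error names the user system:kube-scheduler, so --user replaces the --serviceaccount form from the hint):

	kubectl -n kube-system create rolebinding kube-scheduler-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler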
	
	
	==> kubelet <==
	Sep 04 19:44:27 functional-143000 kubelet[6555]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 04 19:44:27 functional-143000 kubelet[6555]: I0904 19:44:27.058304    6555 scope.go:117] "RemoveContainer" containerID="b2fed9253e91176d92ecaf3aa3c33e49bd67941ed6992ab16c57803dc8d8bed3"
	Sep 04 19:44:27 functional-143000 kubelet[6555]: I0904 19:44:27.126092    6555 scope.go:117] "RemoveContainer" containerID="431c68753123630d4b3c6477b51b129bae9f9eefe80b5df148b0ab857a8d1e70"
	Sep 04 19:44:27 functional-143000 kubelet[6555]: I0904 19:44:27.132998    6555 scope.go:117] "RemoveContainer" containerID="b2fed9253e91176d92ecaf3aa3c33e49bd67941ed6992ab16c57803dc8d8bed3"
	Sep 04 19:44:28 functional-143000 kubelet[6555]: I0904 19:44:28.014076    6555 scope.go:117] "RemoveContainer" containerID="795d717048d2c928aca581fa17bbf7df7e5cab76ca210f7141d3c1d9a68c07c5"
	Sep 04 19:44:28 functional-143000 kubelet[6555]: E0904 19:44:28.014187    6555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-r7tzh_default(cde65b9c-6ab1-4475-b29d-0859555ddde0)\"" pod="default/hello-node-connect-65d86f57f4-r7tzh" podUID="cde65b9c-6ab1-4475-b29d-0859555ddde0"
	Sep 04 19:44:31 functional-143000 kubelet[6555]: I0904 19:44:31.292108    6555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9vqf\" (UniqueName: \"kubernetes.io/projected/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-kube-api-access-c9vqf\") pod \"busybox-mount\" (UID: \"b93d6ab4-d69e-431c-ab21-e3a1b54273a9\") " pod="default/busybox-mount"
	Sep 04 19:44:31 functional-143000 kubelet[6555]: I0904 19:44:31.292164    6555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-test-volume\") pod \"busybox-mount\" (UID: \"b93d6ab4-d69e-431c-ab21-e3a1b54273a9\") " pod="default/busybox-mount"
	Sep 04 19:44:34 functional-143000 kubelet[6555]: I0904 19:44:34.048936    6555 scope.go:117] "RemoveContainer" containerID="94d6776421b087c5db5a1809481750246227cbba4211e7cc76b7fd106b465ffc"
	Sep 04 19:44:34 functional-143000 kubelet[6555]: I0904 19:44:34.317186    6555 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-test-volume\") pod \"b93d6ab4-d69e-431c-ab21-e3a1b54273a9\" (UID: \"b93d6ab4-d69e-431c-ab21-e3a1b54273a9\") "
	Sep 04 19:44:34 functional-143000 kubelet[6555]: I0904 19:44:34.317221    6555 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9vqf\" (UniqueName: \"kubernetes.io/projected/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-kube-api-access-c9vqf\") pod \"b93d6ab4-d69e-431c-ab21-e3a1b54273a9\" (UID: \"b93d6ab4-d69e-431c-ab21-e3a1b54273a9\") "
	Sep 04 19:44:34 functional-143000 kubelet[6555]: I0904 19:44:34.317425    6555 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-test-volume" (OuterVolumeSpecName: "test-volume") pod "b93d6ab4-d69e-431c-ab21-e3a1b54273a9" (UID: "b93d6ab4-d69e-431c-ab21-e3a1b54273a9"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 04 19:44:34 functional-143000 kubelet[6555]: I0904 19:44:34.317986    6555 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-kube-api-access-c9vqf" (OuterVolumeSpecName: "kube-api-access-c9vqf") pod "b93d6ab4-d69e-431c-ab21-e3a1b54273a9" (UID: "b93d6ab4-d69e-431c-ab21-e3a1b54273a9"). InnerVolumeSpecName "kube-api-access-c9vqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:44:34 functional-143000 kubelet[6555]: I0904 19:44:34.418261    6555 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-test-volume\") on node \"functional-143000\" DevicePath \"\""
	Sep 04 19:44:34 functional-143000 kubelet[6555]: I0904 19:44:34.418296    6555 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-c9vqf\" (UniqueName: \"kubernetes.io/projected/b93d6ab4-d69e-431c-ab21-e3a1b54273a9-kube-api-access-c9vqf\") on node \"functional-143000\" DevicePath \"\""
	Sep 04 19:44:35 functional-143000 kubelet[6555]: I0904 19:44:35.097134    6555 scope.go:117] "RemoveContainer" containerID="94d6776421b087c5db5a1809481750246227cbba4211e7cc76b7fd106b465ffc"
	Sep 04 19:44:35 functional-143000 kubelet[6555]: I0904 19:44:35.097651    6555 scope.go:117] "RemoveContainer" containerID="9b1984edfdb9e8b57d50c2279465b41ff6708319454c64ee67c7be21eecb4dd9"
	Sep 04 19:44:35 functional-143000 kubelet[6555]: E0904 19:44:35.097916    6555 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-x4fnh_default(18033e89-e3c0-4ea2-9ec5-0b766b4306ff)\"" pod="default/hello-node-64b4f8f9ff-x4fnh" podUID="18033e89-e3c0-4ea2-9ec5-0b766b4306ff"
	Sep 04 19:44:35 functional-143000 kubelet[6555]: I0904 19:44:35.111478    6555 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ac6d5bbd802c3475042d3066e193ac2b76e7426cefb1813b4ba1bed79f11f6"
	Sep 04 19:44:38 functional-143000 kubelet[6555]: E0904 19:44:38.537873    6555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b93d6ab4-d69e-431c-ab21-e3a1b54273a9" containerName="mount-munger"
	Sep 04 19:44:38 functional-143000 kubelet[6555]: I0904 19:44:38.537898    6555 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93d6ab4-d69e-431c-ab21-e3a1b54273a9" containerName="mount-munger"
	Sep 04 19:44:38 functional-143000 kubelet[6555]: I0904 19:44:38.559035    6555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9a20288b-f6e0-4f01-87c4-4022419eb14a-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-vwchb\" (UID: \"9a20288b-f6e0-4f01-87c4-4022419eb14a\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-vwchb"
	Sep 04 19:44:38 functional-143000 kubelet[6555]: I0904 19:44:38.559128    6555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w77jm\" (UniqueName: \"kubernetes.io/projected/9a20288b-f6e0-4f01-87c4-4022419eb14a-kube-api-access-w77jm\") pod \"kubernetes-dashboard-695b96c756-vwchb\" (UID: \"9a20288b-f6e0-4f01-87c4-4022419eb14a\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-vwchb"
	Sep 04 19:44:38 functional-143000 kubelet[6555]: I0904 19:44:38.760298    6555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0288ef48-f80c-4d22-84d7-f082a3420033-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-mqz9p\" (UID: \"0288ef48-f80c-4d22-84d7-f082a3420033\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-mqz9p"
	Sep 04 19:44:38 functional-143000 kubelet[6555]: I0904 19:44:38.760328    6555 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qtjp\" (UniqueName: \"kubernetes.io/projected/0288ef48-f80c-4d22-84d7-f082a3420033-kube-api-access-6qtjp\") pod \"dashboard-metrics-scraper-c5db448b4-mqz9p\" (UID: \"0288ef48-f80c-4d22-84d7-f082a3420033\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-mqz9p"
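
	The kubelet entries above show echoserver-arm in CrashLoopBackOff for both hello-node pods, which is what ServiceCmdConnect exercises. A first diagnostic step, assuming the default app=<name> label that kubectl create deployment applies:

	kubectl --context functional-143000 logs deployment/hello-node-connect --previous
	kubectl --context functional-143000 describe pod -l app=hello-node-connect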
	
	
	==> storage-provisioner [2b8e57e0a510] <==
	I0904 19:43:30.560167       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 19:43:30.565870       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 19:43:30.565892       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 19:43:47.978786       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 19:43:47.979951       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-143000_21cf5ce1-dc7f-4dfb-9a4a-5dc3c3166f97!
	I0904 19:43:47.981621       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23410317-2675-49af-9e2f-1144e1fa7087", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-143000_21cf5ce1-dc7f-4dfb-9a4a-5dc3c3166f97 became leader
	I0904 19:43:48.080161       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-143000_21cf5ce1-dc7f-4dfb-9a4a-5dc3c3166f97!
	I0904 19:44:01.089579       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0904 19:44:01.089781       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8a01514f-c5dd-420f-aa55-8cd29cd6df65", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0904 19:44:01.089654       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    91ab10db-1f04-46e9-9831-62f6aba429b9 339 0 2024-09-04 19:41:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-04 19:41:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-8a01514f-c5dd-420f-aa55-8cd29cd6df65 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  8a01514f-c5dd-420f-aa55-8cd29cd6df65 692 0 2024-09-04 19:44:01 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-04 19:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-04 19:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0904 19:44:01.090128       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-8a01514f-c5dd-420f-aa55-8cd29cd6df65" provisioned
	I0904 19:44:01.090148       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0904 19:44:01.090155       1 volume_store.go:212] Trying to save persistentvolume "pvc-8a01514f-c5dd-420f-aa55-8cd29cd6df65"
	I0904 19:44:01.096850       1 volume_store.go:219] persistentvolume "pvc-8a01514f-c5dd-420f-aa55-8cd29cd6df65" saved
	I0904 19:44:01.097330       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8a01514f-c5dd-420f-aa55-8cd29cd6df65", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-8a01514f-c5dd-420f-aa55-8cd29cd6df65
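
	This storage-provisioner instance shows the healthy path end to end: it acquires the lease, provisions pvc-8a01514f-c5dd-420f-aa55-8cd29cd6df65 for default/myclaim, and saves the PV. To confirm the binding from the same kube context (all names taken from the log above):

	kubectl --context functional-143000 -n default get pvc myclaim
	kubectl --context functional-143000 get pv pvc-8a01514f-c5dd-420f-aa55-8cd29cd6df65 -o wide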
	
	
	==> storage-provisioner [496ee0afa53b] <==
	I0904 19:42:47.731517       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 19:42:47.739984       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 19:42:47.740000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 19:43:05.142670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 19:43:05.142752       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-143000_1d203bac-4285-4131-8e46-b5a8987ac85e!
	I0904 19:43:05.142900       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23410317-2675-49af-9e2f-1144e1fa7087", APIVersion:"v1", ResourceVersion:"528", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-143000_1d203bac-4285-4131-8e46-b5a8987ac85e became leader
	I0904 19:43:05.243488       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-143000_1d203bac-4285-4131-8e46-b5a8987ac85e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-143000 -n functional-143000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-143000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-mqz9p kubernetes-dashboard-695b96c756-vwchb
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-143000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-mqz9p kubernetes-dashboard-695b96c756-vwchb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-143000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-mqz9p kubernetes-dashboard-695b96c756-vwchb: exit status 1 (42.549042ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-143000/192.168.105.4
	Start Time:       Wed, 04 Sep 2024 12:44:31 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://119055ffc614b5f05bc65da124951ec398ee427044cde90138cfe996c12f9496
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 04 Sep 2024 12:44:32 -0700
	      Finished:     Wed, 04 Sep 2024 12:44:32 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9vqf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-c9vqf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/busybox-mount to functional-143000
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.341s (1.341s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-mqz9p" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-vwchb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-143000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-mqz9p kubernetes-dashboard-695b96c756-vwchb: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.48s)
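
The describe output above shows busybox-mount itself terminated with exit code 0; the part of this test that actually failed is the echoserver-arm CrashLoopBackOff recorded in the kubelet log. A plausible manual reproduction of the connectivity check (profile and service names from this run):

	out/minikube-darwin-arm64 -p functional-143000 service hello-node-connect --url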

                                                
                                    
TestFunctional/parallel/License (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-darwin-arm64 license: exit status 40 (140.016333ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.14s)
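
This failure never touches a cluster: `minikube license` downloads a license bundle over HTTP and the request 404s, so the failure is environmental rather than driver-related. Re-running with minikube's standard debug flag should surface the exact URL being fetched:

	out/minikube-darwin-arm64 license --alsologtostderr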

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 node stop m02 -v=7 --alsologtostderr
E0904 12:48:55.525785    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:55.880803    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:55.888384    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:55.901751    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:55.925119    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:55.968476    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:56.051855    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:56.215211    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:56.538579    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:57.181984    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:48:58.465458    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:49:01.028909    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:49:06.150911    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-789000 node stop m02 -v=7 --alsologtostderr: (12.192014958s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr
E0904 12:49:16.394053    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:49:36.876684    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:50:17.839314    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:51:39.761097    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr: exit status 7 (2m56.030865166s)

                                                
                                                
-- stdout --
	ha-789000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-789000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-789000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 12:49:06.212471    3065 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:49:06.212840    3065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:49:06.212844    3065 out.go:358] Setting ErrFile to fd 2...
	I0904 12:49:06.212847    3065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:49:06.213020    3065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:49:06.213176    3065 out.go:352] Setting JSON to false
	I0904 12:49:06.213187    3065 mustload.go:65] Loading cluster: ha-789000
	I0904 12:49:06.213223    3065 notify.go:220] Checking for updates...
	I0904 12:49:06.213440    3065 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:49:06.213448    3065 status.go:255] checking status of ha-789000 ...
	I0904 12:49:06.214212    3065 status.go:330] ha-789000 host status = "Running" (err=<nil>)
	I0904 12:49:06.214220    3065 host.go:66] Checking if "ha-789000" exists ...
	I0904 12:49:06.214315    3065 host.go:66] Checking if "ha-789000" exists ...
	I0904 12:49:06.214427    3065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 12:49:06.214436    3065 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/id_rsa Username:docker}
	W0904 12:49:32.185664    3065 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0904 12:49:32.185807    3065 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0904 12:49:32.185826    3065 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0904 12:49:32.185839    3065 status.go:257] ha-789000 status: &{Name:ha-789000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0904 12:49:32.185858    3065 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0904 12:49:32.185871    3065 status.go:255] checking status of ha-789000-m02 ...
	I0904 12:49:32.186269    3065 status.go:330] ha-789000-m02 host status = "Stopped" (err=<nil>)
	I0904 12:49:32.186279    3065 status.go:343] host is not running, skipping remaining checks
	I0904 12:49:32.186284    3065 status.go:257] ha-789000-m02 status: &{Name:ha-789000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 12:49:32.186295    3065 status.go:255] checking status of ha-789000-m03 ...
	I0904 12:49:32.187441    3065 status.go:330] ha-789000-m03 host status = "Running" (err=<nil>)
	I0904 12:49:32.187472    3065 host.go:66] Checking if "ha-789000-m03" exists ...
	I0904 12:49:32.187701    3065 host.go:66] Checking if "ha-789000-m03" exists ...
	I0904 12:49:32.187942    3065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 12:49:32.187956    3065 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m03/id_rsa Username:docker}
	W0904 12:50:47.188255    3065 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0904 12:50:47.188306    3065 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0904 12:50:47.188329    3065 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0904 12:50:47.188333    3065 status.go:257] ha-789000-m03 status: &{Name:ha-789000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0904 12:50:47.188343    3065 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0904 12:50:47.188346    3065 status.go:255] checking status of ha-789000-m04 ...
	I0904 12:50:47.189050    3065 status.go:330] ha-789000-m04 host status = "Running" (err=<nil>)
	I0904 12:50:47.189059    3065 host.go:66] Checking if "ha-789000-m04" exists ...
	I0904 12:50:47.189167    3065 host.go:66] Checking if "ha-789000-m04" exists ...
	I0904 12:50:47.189294    3065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 12:50:47.189300    3065 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m04/id_rsa Username:docker}
	W0904 12:52:02.190170    3065 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0904 12:52:02.190232    3065 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0904 12:52:02.190242    3065 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0904 12:52:02.190247    3065 status.go:257] ha-789000-m04 status: &{Name:ha-789000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0904 12:52:02.190259    3065 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr": ha-789000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-789000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-789000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-789000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr": ha-789000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-789000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-789000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-789000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr": ha-789000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-789000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-789000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-789000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 3 (25.959588125s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 12:52:28.149777    3088 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0904 12:52:28.149791    3088 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.18s)
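
A pattern worth noting in this failure: every SSH dial to the remaining nodes times out (192.168.105.5, .7, .8), so only the deliberately stopped m02 reports cleanly; the guests appear unreachable rather than merely degraded. Connectivity can be checked outside minikube with the key path and username shown verbatim in the stderr log:

	ssh -o ConnectTimeout=10 \
	  -i /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/id_rsa \
	  docker@192.168.105.5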

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0904 12:53:27.790510    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.534011708s)
ha_test.go:413: expected profile "ha-789000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-789000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-789000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-789000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
E0904 12:53:55.875410    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 3 (25.963632083s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 12:54:11.645042    3106 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0904 12:54:11.645068    3106 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.50s)
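
The assertion here reads only the top-level Status field of each profile in the JSON blob. To inspect just that field without scanning the full config dump, one option (assuming jq is available on the host):

	out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {Name, Status}'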

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (209.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-789000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.112865042s)

                                                
                                                
-- stdout --
	* Starting "ha-789000-m02" control-plane node in "ha-789000" cluster
	* Restarting existing qemu2 VM for "ha-789000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-789000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 12:54:11.693620    3109 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:54:11.693885    3109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:54:11.693890    3109 out.go:358] Setting ErrFile to fd 2...
	I0904 12:54:11.693892    3109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:54:11.694054    3109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:54:11.694351    3109 mustload.go:65] Loading cluster: ha-789000
	I0904 12:54:11.694640    3109 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0904 12:54:11.694890    3109 host.go:58] "ha-789000-m02" host status: Stopped
	I0904 12:54:11.699395    3109 out.go:177] * Starting "ha-789000-m02" control-plane node in "ha-789000" cluster
	I0904 12:54:11.704366    3109 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 12:54:11.704380    3109 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 12:54:11.704390    3109 cache.go:56] Caching tarball of preloaded images
	I0904 12:54:11.704494    3109 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 12:54:11.704516    3109 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 12:54:11.704603    3109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/ha-789000/config.json ...
	I0904 12:54:11.705138    3109 start.go:360] acquireMachinesLock for ha-789000-m02: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 12:54:11.705193    3109 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "ha-789000-m02"
	I0904 12:54:11.705202    3109 start.go:96] Skipping create...Using existing machine configuration
	I0904 12:54:11.705207    3109 fix.go:54] fixHost starting: m02
	I0904 12:54:11.705357    3109 fix.go:112] recreateIfNeeded on ha-789000-m02: state=Stopped err=<nil>
	W0904 12:54:11.705363    3109 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 12:54:11.709384    3109 out.go:177] * Restarting existing qemu2 VM for "ha-789000-m02" ...
	I0904 12:54:11.713395    3109 qemu.go:418] Using hvf for hardware acceleration
	I0904 12:54:11.713452    3109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0d:d2:31:6b:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/disk.qcow2
	I0904 12:54:11.716504    3109 main.go:141] libmachine: STDOUT: 
	I0904 12:54:11.716526    3109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 12:54:11.716555    3109 fix.go:56] duration metric: took 11.348292ms for fixHost
	I0904 12:54:11.716561    3109 start.go:83] releasing machines lock for "ha-789000-m02", held for 11.364041ms
	W0904 12:54:11.716568    3109 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 12:54:11.716607    3109 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 12:54:11.716612    3109 start.go:729] Will try again in 5 seconds ...
	I0904 12:54:16.717751    3109 start.go:360] acquireMachinesLock for ha-789000-m02: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 12:54:16.718292    3109 start.go:364] duration metric: took 430.75µs to acquireMachinesLock for "ha-789000-m02"
	I0904 12:54:16.718440    3109 start.go:96] Skipping create...Using existing machine configuration
	I0904 12:54:16.718456    3109 fix.go:54] fixHost starting: m02
	I0904 12:54:16.719052    3109 fix.go:112] recreateIfNeeded on ha-789000-m02: state=Stopped err=<nil>
	W0904 12:54:16.719073    3109 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 12:54:16.723217    3109 out.go:177] * Restarting existing qemu2 VM for "ha-789000-m02" ...
	I0904 12:54:16.727205    3109 qemu.go:418] Using hvf for hardware acceleration
	I0904 12:54:16.727371    3109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0d:d2:31:6b:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/disk.qcow2
	I0904 12:54:16.735053    3109 main.go:141] libmachine: STDOUT: 
	I0904 12:54:16.735115    3109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 12:54:16.735195    3109 fix.go:56] duration metric: took 16.741416ms for fixHost
	I0904 12:54:16.735210    3109 start.go:83] releasing machines lock for "ha-789000-m02", held for 16.900209ms
	W0904 12:54:16.735342    3109 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 12:54:16.740112    3109 out.go:201] 
	W0904 12:54:16.744210    3109 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 12:54:16.744228    3109 out.go:270] * 
	* 
	W0904 12:54:16.749963    3109 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 12:54:16.754089    3109 out.go:201] 

** /stderr **
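The failure above is entirely in the host-networking layer: the qemu2 driver never execs qemu-system-aarch64 directly, it launches it through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach the socket_vmnet daemon on /var/run/socket_vmnet and hand that connection to qemu as descriptor 3 (hence "-netdev socket,id=net0,fd=3" in the command line above). A minimal Go sketch of that launch pattern, as an illustration only; this is not the real socket_vmnet_client, and the fd-passing detail is an assumption read off the qemu arguments:

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Dial the daemon; with socket_vmnet not running this fails with the
		// exact error seen throughout this report: connection refused.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching "-netdev socket,fd=3".
		cmd := exec.Command("qemu-system-aarch64", os.Args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because the dial happens before qemu ever starts, a dead or missing socket_vmnet daemon fails every VM start instantly, which is consistent with the millisecond-scale fixHost durations logged above.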
ha_test.go:422: I0904 12:54:11.693620    3109 out.go:345] Setting OutFile to fd 1 ...
I0904 12:54:11.693885    3109 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:54:11.693890    3109 out.go:358] Setting ErrFile to fd 2...
I0904 12:54:11.693892    3109 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:54:11.694054    3109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
I0904 12:54:11.694351    3109 mustload.go:65] Loading cluster: ha-789000
I0904 12:54:11.694640    3109 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0904 12:54:11.694890    3109 host.go:58] "ha-789000-m02" host status: Stopped
I0904 12:54:11.699395    3109 out.go:177] * Starting "ha-789000-m02" control-plane node in "ha-789000" cluster
I0904 12:54:11.704366    3109 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0904 12:54:11.704380    3109 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0904 12:54:11.704390    3109 cache.go:56] Caching tarball of preloaded images
I0904 12:54:11.704494    3109 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0904 12:54:11.704516    3109 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0904 12:54:11.704603    3109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/ha-789000/config.json ...
I0904 12:54:11.705138    3109 start.go:360] acquireMachinesLock for ha-789000-m02: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0904 12:54:11.705193    3109 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "ha-789000-m02"
I0904 12:54:11.705202    3109 start.go:96] Skipping create...Using existing machine configuration
I0904 12:54:11.705207    3109 fix.go:54] fixHost starting: m02
I0904 12:54:11.705357    3109 fix.go:112] recreateIfNeeded on ha-789000-m02: state=Stopped err=<nil>
W0904 12:54:11.705363    3109 fix.go:138] unexpected machine state, will restart: <nil>
I0904 12:54:11.709384    3109 out.go:177] * Restarting existing qemu2 VM for "ha-789000-m02" ...
I0904 12:54:11.713395    3109 qemu.go:418] Using hvf for hardware acceleration
I0904 12:54:11.713452    3109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0d:d2:31:6b:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/disk.qcow2
I0904 12:54:11.716504    3109 main.go:141] libmachine: STDOUT: 
I0904 12:54:11.716526    3109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0904 12:54:11.716555    3109 fix.go:56] duration metric: took 11.348292ms for fixHost
I0904 12:54:11.716561    3109 start.go:83] releasing machines lock for "ha-789000-m02", held for 11.364041ms
W0904 12:54:11.716568    3109 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0904 12:54:11.716607    3109 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0904 12:54:11.716612    3109 start.go:729] Will try again in 5 seconds ...
I0904 12:54:16.717751    3109 start.go:360] acquireMachinesLock for ha-789000-m02: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0904 12:54:16.718292    3109 start.go:364] duration metric: took 430.75µs to acquireMachinesLock for "ha-789000-m02"
I0904 12:54:16.718440    3109 start.go:96] Skipping create...Using existing machine configuration
I0904 12:54:16.718456    3109 fix.go:54] fixHost starting: m02
I0904 12:54:16.719052    3109 fix.go:112] recreateIfNeeded on ha-789000-m02: state=Stopped err=<nil>
W0904 12:54:16.719073    3109 fix.go:138] unexpected machine state, will restart: <nil>
I0904 12:54:16.723217    3109 out.go:177] * Restarting existing qemu2 VM for "ha-789000-m02" ...
I0904 12:54:16.727205    3109 qemu.go:418] Using hvf for hardware acceleration
I0904 12:54:16.727371    3109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0d:d2:31:6b:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m02/disk.qcow2
I0904 12:54:16.735053    3109 main.go:141] libmachine: STDOUT: 
I0904 12:54:16.735115    3109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0904 12:54:16.735195    3109 fix.go:56] duration metric: took 16.741416ms for fixHost
I0904 12:54:16.735210    3109 start.go:83] releasing machines lock for "ha-789000-m02", held for 16.900209ms
W0904 12:54:16.735342    3109 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0904 12:54:16.740112    3109 out.go:201] 
W0904 12:54:16.744210    3109 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0904 12:54:16.744228    3109 out.go:270] * 
* 
W0904 12:54:16.749963    3109 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0904 12:54:16.754089    3109 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-789000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr
E0904 12:54:23.601456    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr: exit status 7 (2m58.118769083s)

-- stdout --
	ha-789000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-789000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-789000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0904 12:54:16.811179    3113 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:54:16.811385    3113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:54:16.811391    3113 out.go:358] Setting ErrFile to fd 2...
	I0904 12:54:16.811393    3113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:54:16.811549    3113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:54:16.811721    3113 out.go:352] Setting JSON to false
	I0904 12:54:16.811741    3113 mustload.go:65] Loading cluster: ha-789000
	I0904 12:54:16.811805    3113 notify.go:220] Checking for updates...
	I0904 12:54:16.812037    3113 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:54:16.812048    3113 status.go:255] checking status of ha-789000 ...
	I0904 12:54:16.812944    3113 status.go:330] ha-789000 host status = "Running" (err=<nil>)
	I0904 12:54:16.812953    3113 host.go:66] Checking if "ha-789000" exists ...
	I0904 12:54:16.813068    3113 host.go:66] Checking if "ha-789000" exists ...
	I0904 12:54:16.813196    3113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 12:54:16.813204    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/id_rsa Username:docker}
	W0904 12:54:16.813404    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0904 12:54:16.813421    3113 retry.go:31] will retry after 190.846621ms: dial tcp 192.168.105.5:22: connect: host is down
	W0904 12:54:17.006621    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0904 12:54:17.006672    3113 retry.go:31] will retry after 551.99472ms: dial tcp 192.168.105.5:22: connect: host is down
	W0904 12:54:17.561238    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0904 12:54:17.561317    3113 retry.go:31] will retry after 408.134165ms: dial tcp 192.168.105.5:22: connect: host is down
	W0904 12:54:17.971645    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0904 12:54:17.971705    3113 retry.go:31] will retry after 308.503761ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0904 12:54:18.282017    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/id_rsa Username:docker}
	W0904 12:54:18.282291    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0904 12:54:18.282305    3113 retry.go:31] will retry after 129.786698ms: dial tcp 192.168.105.5:22: connect: host is down
	W0904 12:54:18.414292    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0904 12:54:18.414323    3113 retry.go:31] will retry after 529.973088ms: dial tcp 192.168.105.5:22: connect: host is down
	W0904 12:54:44.866717    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0904 12:54:44.866770    3113 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0904 12:54:44.866777    3113 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0904 12:54:44.866781    3113 status.go:257] ha-789000 status: &{Name:ha-789000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0904 12:54:44.866792    3113 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0904 12:54:44.866795    3113 status.go:255] checking status of ha-789000-m02 ...
	I0904 12:54:44.867009    3113 status.go:330] ha-789000-m02 host status = "Stopped" (err=<nil>)
	I0904 12:54:44.867016    3113 status.go:343] host is not running, skipping remaining checks
	I0904 12:54:44.867019    3113 status.go:257] ha-789000-m02 status: &{Name:ha-789000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 12:54:44.867023    3113 status.go:255] checking status of ha-789000-m03 ...
	I0904 12:54:44.867722    3113 status.go:330] ha-789000-m03 host status = "Running" (err=<nil>)
	I0904 12:54:44.867730    3113 host.go:66] Checking if "ha-789000-m03" exists ...
	I0904 12:54:44.867842    3113 host.go:66] Checking if "ha-789000-m03" exists ...
	I0904 12:54:44.867977    3113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 12:54:44.867985    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m03/id_rsa Username:docker}
	W0904 12:55:59.869641    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0904 12:55:59.869711    3113 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0904 12:55:59.869721    3113 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0904 12:55:59.869725    3113 status.go:257] ha-789000-m03 status: &{Name:ha-789000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0904 12:55:59.869735    3113 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0904 12:55:59.869739    3113 status.go:255] checking status of ha-789000-m04 ...
	I0904 12:55:59.870499    3113 status.go:330] ha-789000-m04 host status = "Running" (err=<nil>)
	I0904 12:55:59.870505    3113 host.go:66] Checking if "ha-789000-m04" exists ...
	I0904 12:55:59.870600    3113 host.go:66] Checking if "ha-789000-m04" exists ...
	I0904 12:55:59.870735    3113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 12:55:59.870740    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000-m04/id_rsa Username:docker}
	W0904 12:57:14.870033    3113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0904 12:57:14.870238    3113 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0904 12:57:14.870282    3113 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0904 12:57:14.870307    3113 status.go:257] ha-789000-m04 status: &{Name:ha-789000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0904 12:57:14.870349    3113 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr" : exit status 7
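The nearly three-minute status run above is dominated by SSH dialing: each node check retries the dial after short, jittered delays ("will retry after 190.846621ms", "551.99472ms", ...) and finally blocks in a dial that ends with "operation timed out". A rough Go sketch of that retry shape, as an illustration of the pattern in the log rather than minikube's actual retry implementation; the address and timeouts here are taken from the output above:

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithRetry keeps dialing addr with a growing, jittered delay until
	// an overall deadline passes, mirroring the retry.go lines in the log.
	func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
		start := time.Now()
		delay := 200 * time.Millisecond
		for {
			conn, err := net.DialTimeout("tcp", addr, 30*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Since(start) > deadline {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		if conn, err := dialWithRetry("192.168.105.5:22", time.Minute); err != nil {
			fmt.Println(err)
		} else {
			conn.Close()
		}
	}

Only m02 is detected as Stopped locally and fails fast; the three nodes still reported as Running (ha-789000, m03, m04) each have to time out this way, which is where the minutes go before the exit status 7.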
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 3 (26.003786083s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0904 12:57:40.875085    3138 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0904 12:57:40.875141    3138 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.24s)
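Every start attempt in this test dies on the same precondition, so a useful triage step is to probe the daemon socket directly before blaming the VM. A minimal sketch, assuming only the socket path shown in the log; this helper is hypothetical, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The qemu2 driver can only start VMs if something is listening here.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable; qemu2 VM starts will fail:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this run the probe would print "connection refused", matching the driver's STDERR and explaining why the retry five seconds later fails identically.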

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-789000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-789000 -v=7 --alsologtostderr
E0904 12:59:50.886810    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-789000 -v=7 --alsologtostderr: (3m49.008089958s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-789000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-789000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225880125s)

-- stdout --
	* [ha-789000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-789000" primary control-plane node in "ha-789000" cluster
	* Restarting existing qemu2 VM for "ha-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:02:48.898430    3569 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:02:48.898612    3569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:02:48.898616    3569 out.go:358] Setting ErrFile to fd 2...
	I0904 13:02:48.898619    3569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:02:48.898793    3569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:02:48.899964    3569 out.go:352] Setting JSON to false
	I0904 13:02:48.919609    3569 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3732,"bootTime":1725476436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:02:48.919679    3569 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:02:48.924615    3569 out.go:177] * [ha-789000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:02:48.932578    3569 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:02:48.932636    3569 notify.go:220] Checking for updates...
	I0904 13:02:48.940477    3569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:02:48.943548    3569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:02:48.946565    3569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:02:48.949442    3569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:02:48.952529    3569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:02:48.955956    3569 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:02:48.956008    3569 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:02:48.960503    3569 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:02:48.967479    3569 start.go:297] selected driver: qemu2
	I0904 13:02:48.967486    3569 start.go:901] validating driver "qemu2" against &{Name:ha-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-789000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:02:48.967565    3569 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:02:48.970217    3569 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:02:48.970243    3569 cni.go:84] Creating CNI manager for ""
	I0904 13:02:48.970249    3569 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0904 13:02:48.970315    3569 start.go:340] cluster config:
	{Name:ha-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-789000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:02:48.974523    3569 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:02:48.983484    3569 out.go:177] * Starting "ha-789000" primary control-plane node in "ha-789000" cluster
	I0904 13:02:48.986378    3569 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:02:48.986391    3569 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:02:48.986399    3569 cache.go:56] Caching tarball of preloaded images
	I0904 13:02:48.986464    3569 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:02:48.986470    3569 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:02:48.986536    3569 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/ha-789000/config.json ...
	I0904 13:02:48.987044    3569 start.go:360] acquireMachinesLock for ha-789000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:02:48.987078    3569 start.go:364] duration metric: took 27.584µs to acquireMachinesLock for "ha-789000"
	I0904 13:02:48.987087    3569 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:02:48.987093    3569 fix.go:54] fixHost starting: 
	I0904 13:02:48.987206    3569 fix.go:112] recreateIfNeeded on ha-789000: state=Stopped err=<nil>
	W0904 13:02:48.987215    3569 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:02:48.991499    3569 out.go:177] * Restarting existing qemu2 VM for "ha-789000" ...
	I0904 13:02:48.999447    3569 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:02:48.999478    3569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:27:c5:f4:fe:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/disk.qcow2
	I0904 13:02:49.001519    3569 main.go:141] libmachine: STDOUT: 
	I0904 13:02:49.001541    3569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:02:49.001569    3569 fix.go:56] duration metric: took 14.475542ms for fixHost
	I0904 13:02:49.001574    3569 start.go:83] releasing machines lock for "ha-789000", held for 14.492417ms
	W0904 13:02:49.001580    3569 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:02:49.001609    3569 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:02:49.001613    3569 start.go:729] Will try again in 5 seconds ...
	I0904 13:02:54.003749    3569 start.go:360] acquireMachinesLock for ha-789000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:02:54.004265    3569 start.go:364] duration metric: took 338.208µs to acquireMachinesLock for "ha-789000"
	I0904 13:02:54.004404    3569 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:02:54.004424    3569 fix.go:54] fixHost starting: 
	I0904 13:02:54.005171    3569 fix.go:112] recreateIfNeeded on ha-789000: state=Stopped err=<nil>
	W0904 13:02:54.005203    3569 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:02:54.008815    3569 out.go:177] * Restarting existing qemu2 VM for "ha-789000" ...
	I0904 13:02:54.013600    3569 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:02:54.013935    3569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:27:c5:f4:fe:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/disk.qcow2
	I0904 13:02:54.023453    3569 main.go:141] libmachine: STDOUT: 
	I0904 13:02:54.023518    3569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:02:54.023585    3569 fix.go:56] duration metric: took 19.161792ms for fixHost
	I0904 13:02:54.023603    3569 start.go:83] releasing machines lock for "ha-789000", held for 19.314042ms
	W0904 13:02:54.023814    3569 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:02:54.030632    3569 out.go:201] 
	W0904 13:02:54.033728    3569 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:02:54.033756    3569 out.go:270] * 
	* 
	W0904 13:02:54.036505    3569 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:02:54.041658    3569 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-789000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-789000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 7 (32.100583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.40s)
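Note the split behavior in this test: the stop succeeded (3m49s), but the follow-up start exited 80 within about five seconds, and the post-mortem status exited 7. A sketch of telling these exit codes apart when scripting against the same binary; the mapping below is read off this report's output only, not an authoritative minikube exit-code table (83 appears in the next test, when a command targets a stopped host):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Forward whatever minikube subcommand was given on our command line.
		cmd := exec.Command("out/minikube-darwin-arm64", os.Args[1:]...)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if cmd.ProcessState == nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		switch code := cmd.ProcessState.ExitCode(); code {
		case 0:
			fmt.Println("command succeeded")
		case 3, 7:
			fmt.Println("status ran, but hosts are Stopped or in Error")
		case 80:
			fmt.Println("guest provisioning failed (driver could not start the VM)")
		case 83:
			fmt.Println("command refused: target host is not running")
		default:
			fmt.Printf("unexpected exit code %d (err=%v)\n", code, err)
		}
	}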

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-789000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.367709ms)

-- stdout --
	* The control-plane node ha-789000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-789000"

-- /stdout --
** stderr ** 
	I0904 13:02:54.187193    3581 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:02:54.187417    3581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:02:54.187420    3581 out.go:358] Setting ErrFile to fd 2...
	I0904 13:02:54.187422    3581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:02:54.187577    3581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:02:54.187789    3581 mustload.go:65] Loading cluster: ha-789000
	I0904 13:02:54.188002    3581 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0904 13:02:54.188324    3581 out.go:270] ! The control-plane node ha-789000 host is not running (will try others): state=Stopped
	! The control-plane node ha-789000 host is not running (will try others): state=Stopped
	W0904 13:02:54.188443    3581 out.go:270] ! The control-plane node ha-789000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-789000-m02 host is not running (will try others): state=Stopped
	I0904 13:02:54.192599    3581 out.go:177] * The control-plane node ha-789000-m03 host is not running: state=Stopped
	I0904 13:02:54.195490    3581 out.go:177]   To start a cluster, run: "minikube start -p ha-789000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-789000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr: exit status 7 (30.115ms)

-- stdout --
	ha-789000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:02:54.227531    3583 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:02:54.227673    3583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:02:54.227677    3583 out.go:358] Setting ErrFile to fd 2...
	I0904 13:02:54.227679    3583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:02:54.227804    3583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:02:54.227938    3583 out.go:352] Setting JSON to false
	I0904 13:02:54.227948    3583 mustload.go:65] Loading cluster: ha-789000
	I0904 13:02:54.228021    3583 notify.go:220] Checking for updates...
	I0904 13:02:54.228186    3583 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:02:54.228194    3583 status.go:255] checking status of ha-789000 ...
	I0904 13:02:54.228397    3583 status.go:330] ha-789000 host status = "Stopped" (err=<nil>)
	I0904 13:02:54.228400    3583 status.go:343] host is not running, skipping remaining checks
	I0904 13:02:54.228403    3583 status.go:257] ha-789000 status: &{Name:ha-789000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 13:02:54.228413    3583 status.go:255] checking status of ha-789000-m02 ...
	I0904 13:02:54.228502    3583 status.go:330] ha-789000-m02 host status = "Stopped" (err=<nil>)
	I0904 13:02:54.228504    3583 status.go:343] host is not running, skipping remaining checks
	I0904 13:02:54.228506    3583 status.go:257] ha-789000-m02 status: &{Name:ha-789000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 13:02:54.228510    3583 status.go:255] checking status of ha-789000-m03 ...
	I0904 13:02:54.228593    3583 status.go:330] ha-789000-m03 host status = "Stopped" (err=<nil>)
	I0904 13:02:54.228596    3583 status.go:343] host is not running, skipping remaining checks
	I0904 13:02:54.228598    3583 status.go:257] ha-789000-m03 status: &{Name:ha-789000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 13:02:54.228601    3583 status.go:255] checking status of ha-789000-m04 ...
	I0904 13:02:54.228695    3583 status.go:330] ha-789000-m04 host status = "Stopped" (err=<nil>)
	I0904 13:02:54.228698    3583 status.go:343] host is not running, skipping remaining checks
	I0904 13:02:54.228700    3583 status.go:257] ha-789000-m04 status: &{Name:ha-789000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 7 (30.810791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-789000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-789000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-789000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-789000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 7 (30.25175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (202.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 stop -v=7 --alsologtostderr
E0904 13:03:27.791240    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 13:03:55.875905    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
E0904 13:05:18.964985    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-789000 stop -v=7 --alsologtostderr: (3m21.978224875s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr: exit status 7 (66.448917ms)

-- stdout --
	ha-789000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-789000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:06:16.377563    3640 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:06:16.377775    3640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:06:16.377780    3640 out.go:358] Setting ErrFile to fd 2...
	I0904 13:06:16.377783    3640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:06:16.377947    3640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:06:16.378106    3640 out.go:352] Setting JSON to false
	I0904 13:06:16.378126    3640 mustload.go:65] Loading cluster: ha-789000
	I0904 13:06:16.378166    3640 notify.go:220] Checking for updates...
	I0904 13:06:16.378444    3640 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:06:16.378454    3640 status.go:255] checking status of ha-789000 ...
	I0904 13:06:16.378745    3640 status.go:330] ha-789000 host status = "Stopped" (err=<nil>)
	I0904 13:06:16.378750    3640 status.go:343] host is not running, skipping remaining checks
	I0904 13:06:16.378753    3640 status.go:257] ha-789000 status: &{Name:ha-789000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 13:06:16.378767    3640 status.go:255] checking status of ha-789000-m02 ...
	I0904 13:06:16.378892    3640 status.go:330] ha-789000-m02 host status = "Stopped" (err=<nil>)
	I0904 13:06:16.378896    3640 status.go:343] host is not running, skipping remaining checks
	I0904 13:06:16.378899    3640 status.go:257] ha-789000-m02 status: &{Name:ha-789000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 13:06:16.378904    3640 status.go:255] checking status of ha-789000-m03 ...
	I0904 13:06:16.379030    3640 status.go:330] ha-789000-m03 host status = "Stopped" (err=<nil>)
	I0904 13:06:16.379034    3640 status.go:343] host is not running, skipping remaining checks
	I0904 13:06:16.379036    3640 status.go:257] ha-789000-m03 status: &{Name:ha-789000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 13:06:16.379040    3640 status.go:255] checking status of ha-789000-m04 ...
	I0904 13:06:16.379153    3640 status.go:330] ha-789000-m04 host status = "Stopped" (err=<nil>)
	I0904 13:06:16.379157    3640 status.go:343] host is not running, skipping remaining checks
	I0904 13:06:16.379159    3640 status.go:257] ha-789000-m04 status: &{Name:ha-789000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr": ha-789000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr": ha-789000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr": ha-789000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-789000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 7 (32.810375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-789000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-789000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182776791s)

-- stdout --
	* [ha-789000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-789000" primary control-plane node in "ha-789000" cluster
	* Restarting existing qemu2 VM for "ha-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:06:16.441400    3644 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:06:16.441521    3644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:06:16.441524    3644 out.go:358] Setting ErrFile to fd 2...
	I0904 13:06:16.441526    3644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:06:16.441657    3644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:06:16.442699    3644 out.go:352] Setting JSON to false
	I0904 13:06:16.459003    3644 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3940,"bootTime":1725476436,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:06:16.459085    3644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:06:16.463567    3644 out.go:177] * [ha-789000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:06:16.470503    3644 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:06:16.470562    3644 notify.go:220] Checking for updates...
	I0904 13:06:16.476550    3644 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:06:16.479518    3644 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:06:16.482525    3644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:06:16.485424    3644 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:06:16.488540    3644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:06:16.491856    3644 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:06:16.492110    3644 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:06:16.495426    3644 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:06:16.502497    3644 start.go:297] selected driver: qemu2
	I0904 13:06:16.502506    3644 start.go:901] validating driver "qemu2" against &{Name:ha-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-789000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:06:16.502591    3644 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:06:16.504960    3644 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:06:16.505006    3644 cni.go:84] Creating CNI manager for ""
	I0904 13:06:16.505011    3644 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0904 13:06:16.505066    3644 start.go:340] cluster config:
	{Name:ha-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-789000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:06:16.508548    3644 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:06:16.517465    3644 out.go:177] * Starting "ha-789000" primary control-plane node in "ha-789000" cluster
	I0904 13:06:16.521502    3644 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:06:16.521518    3644 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:06:16.521528    3644 cache.go:56] Caching tarball of preloaded images
	I0904 13:06:16.521590    3644 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:06:16.521596    3644 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:06:16.521677    3644 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/ha-789000/config.json ...
	I0904 13:06:16.522130    3644 start.go:360] acquireMachinesLock for ha-789000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:06:16.522165    3644 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "ha-789000"
	I0904 13:06:16.522174    3644 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:06:16.522178    3644 fix.go:54] fixHost starting: 
	I0904 13:06:16.522293    3644 fix.go:112] recreateIfNeeded on ha-789000: state=Stopped err=<nil>
	W0904 13:06:16.522302    3644 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:06:16.526553    3644 out.go:177] * Restarting existing qemu2 VM for "ha-789000" ...
	I0904 13:06:16.534505    3644 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:06:16.534537    3644 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:27:c5:f4:fe:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/disk.qcow2
	I0904 13:06:16.536554    3644 main.go:141] libmachine: STDOUT: 
	I0904 13:06:16.536577    3644 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:06:16.536606    3644 fix.go:56] duration metric: took 14.429041ms for fixHost
	I0904 13:06:16.536610    3644 start.go:83] releasing machines lock for "ha-789000", held for 14.441042ms
	W0904 13:06:16.536616    3644 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:06:16.536641    3644 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:06:16.536645    3644 start.go:729] Will try again in 5 seconds ...
	I0904 13:06:21.538779    3644 start.go:360] acquireMachinesLock for ha-789000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:06:21.539225    3644 start.go:364] duration metric: took 319.709µs to acquireMachinesLock for "ha-789000"
	I0904 13:06:21.539348    3644 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:06:21.539366    3644 fix.go:54] fixHost starting: 
	I0904 13:06:21.540079    3644 fix.go:112] recreateIfNeeded on ha-789000: state=Stopped err=<nil>
	W0904 13:06:21.540106    3644 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:06:21.543668    3644 out.go:177] * Restarting existing qemu2 VM for "ha-789000" ...
	I0904 13:06:21.551495    3644 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:06:21.551748    3644 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:27:c5:f4:fe:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/ha-789000/disk.qcow2
	I0904 13:06:21.561002    3644 main.go:141] libmachine: STDOUT: 
	I0904 13:06:21.561064    3644 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:06:21.561130    3644 fix.go:56] duration metric: took 21.762375ms for fixHost
	I0904 13:06:21.561147    3644 start.go:83] releasing machines lock for "ha-789000", held for 21.901708ms
	W0904 13:06:21.561347    3644 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:06:21.568448    3644 out.go:201] 
	W0904 13:06:21.572518    3644 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:06:21.572547    3644 out.go:270] * 
	* 
	W0904 13:06:21.574938    3644 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:06:21.583430    3644 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-789000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 7 (71.654417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-789000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-789000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-789000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-789000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 7 (29.612125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-789000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-789000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.062709ms)

-- stdout --
	* The control-plane node ha-789000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-789000"

-- /stdout --
** stderr ** 
	I0904 13:06:21.780489    3659 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:06:21.780876    3659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:06:21.780880    3659 out.go:358] Setting ErrFile to fd 2...
	I0904 13:06:21.780883    3659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:06:21.781050    3659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:06:21.781280    3659 mustload.go:65] Loading cluster: ha-789000
	I0904 13:06:21.781490    3659 config.go:182] Loaded profile config "ha-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0904 13:06:21.781807    3659 out.go:270] ! The control-plane node ha-789000 host is not running (will try others): state=Stopped
	! The control-plane node ha-789000 host is not running (will try others): state=Stopped
	W0904 13:06:21.781907    3659 out.go:270] ! The control-plane node ha-789000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-789000-m02 host is not running (will try others): state=Stopped
	I0904 13:06:21.786243    3659 out.go:177] * The control-plane node ha-789000-m03 host is not running: state=Stopped
	I0904 13:06:21.790195    3659 out.go:177]   To start a cluster, run: "minikube start -p ha-789000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-789000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-789000 -n ha-789000: exit status 7 (31.175917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.01s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-591000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-591000 --driver=qemu2 : exit status 80 (9.941881792s)

-- stdout --
	* [image-591000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-591000" primary control-plane node in "image-591000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-591000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-591000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-591000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-591000 -n image-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-591000 -n image-591000: exit status 7 (67.37275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.01s)

TestJSONOutput/start/Command (9.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-470000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-470000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.869608125s)

-- stdout --
	{"specversion":"1.0","id":"5d1fd64d-151c-4eb8-8539-7114508c514d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-470000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26a703ee-a1a3-435d-9d6f-4f14a6fb634f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19575"}}
	{"specversion":"1.0","id":"f5de8dad-8109-4c4d-9533-e3655a3fbad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig"}}
	{"specversion":"1.0","id":"b48bd542-1d5f-4f5c-801b-b8668d29d4b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d104d64b-c69a-4cb7-917e-cde89ba7d28c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1ebc9c13-c5df-4a5d-ae15-71af37a78232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube"}}
	{"specversion":"1.0","id":"1a5c4bc2-ee73-480b-9e3d-667cc76b2550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"175a4b75-a3b1-4c77-8e71-53e30a26bfd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e4dc755-3302-4729-b569-1179a23bcf4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2c06078b-0bf6-4c48-b924-543683627c5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-470000\" primary control-plane node in \"json-output-470000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"409c7b0e-f7b8-4697-ba5d-f128f56e0ced","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"21247845-deb1-4614-a7db-61a6c7a9ceaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-470000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ce68387-6bdd-4db7-9355-efc4269f514c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0e9c6aef-1333-42c2-b503-543cfe572c6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"6b7f646c-4319-4742-9b41-1413393c92db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-470000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"82ff6c4a-dbd2-4320-800e-91023c73c6f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"da791493-7432-4b2e-bf40-84a623b59561","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-470000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.87s)

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-470000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-470000 --output=json --user=testUser: exit status 83 (76.255083ms)

-- stdout --
	{"specversion":"1.0","id":"515af991-43a0-4a69-8bc7-753a7e57d0c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-470000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"ef0f208a-874f-4545-bb8a-b432b4ecc16f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-470000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-470000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-470000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-470000 --output=json --user=testUser: exit status 83 (45.241917ms)

-- stdout --
	* The control-plane node json-output-470000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-470000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-470000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-470000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.27s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-706000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-706000 --driver=qemu2 : exit status 80 (9.968997125s)

-- stdout --
	* [first-706000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-706000" primary control-plane node in "first-706000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-706000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-706000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-706000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-04 13:06:54.818102 -0700 PDT m=+2531.872331501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-707000 -n second-707000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-707000 -n second-707000: exit status 85 (85.566416ms)

                                                
                                                
-- stdout --
	* Profile "second-707000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-707000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-707000" host is not running, skipping log retrieval (state="* Profile \"second-707000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-707000\"")
helpers_test.go:175: Cleaning up "second-707000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-707000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-04 13:06:55.006725 -0700 PDT m=+2532.060956959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-706000 -n first-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-706000 -n first-706000: exit status 7 (30.489708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-706000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-706000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-706000
--- FAIL: TestMinikubeProfile (10.27s)
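
Every qemu2 start in this report aborts at the same step: the socket_vmnet client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the build host rather than at the individual tests. As a rough illustration only (not minikube code; the socket path is taken from the logs above, everything else is assumed), a minimal Go probe for that precondition could look like:

	// Sketch: check whether the socket_vmnet daemon accepts connections
	// on the unix socket that the failing starts above try to use.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A stopped daemon yields "connect: connection refused",
			// matching the errors throughout this report.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}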

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-647000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-647000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.82979725s)

                                                
                                                
-- stdout --
	* [mount-start-1-647000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-647000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-647000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-647000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-647000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-647000 -n mount-start-1-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-647000 -n mount-start-1-647000: exit status 7 (68.596917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-647000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.90s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-452000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-452000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.800560167s)

                                                
                                                
-- stdout --
	* [multinode-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:07:05.225649    3810 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:07:05.225766    3810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:07:05.225768    3810 out.go:358] Setting ErrFile to fd 2...
	I0904 13:07:05.225771    3810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:07:05.225916    3810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:07:05.227000    3810 out.go:352] Setting JSON to false
	I0904 13:07:05.243031    3810 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3989,"bootTime":1725476436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:07:05.243098    3810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:07:05.250625    3810 out.go:177] * [multinode-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:07:05.258625    3810 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:07:05.258670    3810 notify.go:220] Checking for updates...
	I0904 13:07:05.265575    3810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:07:05.268641    3810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:07:05.271660    3810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:07:05.274657    3810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:07:05.282715    3810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:07:05.285791    3810 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:07:05.289579    3810 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:07:05.296599    3810 start.go:297] selected driver: qemu2
	I0904 13:07:05.296605    3810 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:07:05.296616    3810 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:07:05.298877    3810 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:07:05.302527    3810 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:07:05.305684    3810 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:07:05.305715    3810 cni.go:84] Creating CNI manager for ""
	I0904 13:07:05.305721    3810 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0904 13:07:05.305725    3810 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 13:07:05.305758    3810 start.go:340] cluster config:
	{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:07:05.309655    3810 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:07:05.318548    3810 out.go:177] * Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	I0904 13:07:05.322605    3810 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:07:05.322620    3810 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:07:05.322628    3810 cache.go:56] Caching tarball of preloaded images
	I0904 13:07:05.322684    3810 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:07:05.322690    3810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:07:05.322906    3810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/multinode-452000/config.json ...
	I0904 13:07:05.322919    3810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/multinode-452000/config.json: {Name:mkf697efc4e95cd16dece08c315f84a934caa31d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:07:05.323540    3810 start.go:360] acquireMachinesLock for multinode-452000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:07:05.323578    3810 start.go:364] duration metric: took 31.791µs to acquireMachinesLock for "multinode-452000"
	I0904 13:07:05.323591    3810 start.go:93] Provisioning new machine with config: &{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:07:05.323620    3810 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:07:05.332553    3810 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:07:05.351413    3810 start.go:159] libmachine.API.Create for "multinode-452000" (driver="qemu2")
	I0904 13:07:05.351439    3810 client.go:168] LocalClient.Create starting
	I0904 13:07:05.351494    3810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:07:05.351529    3810 main.go:141] libmachine: Decoding PEM data...
	I0904 13:07:05.351538    3810 main.go:141] libmachine: Parsing certificate...
	I0904 13:07:05.351575    3810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:07:05.351598    3810 main.go:141] libmachine: Decoding PEM data...
	I0904 13:07:05.351607    3810 main.go:141] libmachine: Parsing certificate...
	I0904 13:07:05.352042    3810 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:07:05.499777    3810 main.go:141] libmachine: Creating SSH key...
	I0904 13:07:05.598480    3810 main.go:141] libmachine: Creating Disk image...
	I0904 13:07:05.598485    3810 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:07:05.598685    3810 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:07:05.608001    3810 main.go:141] libmachine: STDOUT: 
	I0904 13:07:05.608024    3810 main.go:141] libmachine: STDERR: 
	I0904 13:07:05.608068    3810 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2 +20000M
	I0904 13:07:05.615979    3810 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:07:05.615994    3810 main.go:141] libmachine: STDERR: 
	I0904 13:07:05.616015    3810 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:07:05.616020    3810 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:07:05.616039    3810 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:07:05.616063    3810 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e1:f7:99:bd:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:07:05.617644    3810 main.go:141] libmachine: STDOUT: 
	I0904 13:07:05.617659    3810 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:07:05.617676    3810 client.go:171] duration metric: took 266.236708ms to LocalClient.Create
	I0904 13:07:07.619810    3810 start.go:128] duration metric: took 2.296207875s to createHost
	I0904 13:07:07.619865    3810 start.go:83] releasing machines lock for "multinode-452000", held for 2.296317459s
	W0904 13:07:07.619904    3810 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:07:07.635416    3810 out.go:177] * Deleting "multinode-452000" in qemu2 ...
	W0904 13:07:07.664193    3810 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:07:07.664227    3810 start.go:729] Will try again in 5 seconds ...
	I0904 13:07:12.664941    3810 start.go:360] acquireMachinesLock for multinode-452000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:07:12.665422    3810 start.go:364] duration metric: took 376.75µs to acquireMachinesLock for "multinode-452000"
	I0904 13:07:12.665565    3810 start.go:93] Provisioning new machine with config: &{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:07:12.665907    3810 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:07:12.677406    3810 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:07:12.729331    3810 start.go:159] libmachine.API.Create for "multinode-452000" (driver="qemu2")
	I0904 13:07:12.729382    3810 client.go:168] LocalClient.Create starting
	I0904 13:07:12.729488    3810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:07:12.729542    3810 main.go:141] libmachine: Decoding PEM data...
	I0904 13:07:12.729556    3810 main.go:141] libmachine: Parsing certificate...
	I0904 13:07:12.729629    3810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:07:12.729672    3810 main.go:141] libmachine: Decoding PEM data...
	I0904 13:07:12.729692    3810 main.go:141] libmachine: Parsing certificate...
	I0904 13:07:12.730222    3810 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:07:12.891147    3810 main.go:141] libmachine: Creating SSH key...
	I0904 13:07:12.926573    3810 main.go:141] libmachine: Creating Disk image...
	I0904 13:07:12.926581    3810 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:07:12.926792    3810 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:07:12.935913    3810 main.go:141] libmachine: STDOUT: 
	I0904 13:07:12.935932    3810 main.go:141] libmachine: STDERR: 
	I0904 13:07:12.935974    3810 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2 +20000M
	I0904 13:07:12.943937    3810 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:07:12.943951    3810 main.go:141] libmachine: STDERR: 
	I0904 13:07:12.943961    3810 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:07:12.943966    3810 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:07:12.943975    3810 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:07:12.944018    3810 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a5:f4:80:86:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:07:12.945604    3810 main.go:141] libmachine: STDOUT: 
	I0904 13:07:12.945620    3810 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:07:12.945630    3810 client.go:171] duration metric: took 216.248042ms to LocalClient.Create
	I0904 13:07:14.947769    3810 start.go:128] duration metric: took 2.281872667s to createHost
	I0904 13:07:14.947871    3810 start.go:83] releasing machines lock for "multinode-452000", held for 2.282434s
	W0904 13:07:14.948161    3810 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:07:14.962821    3810 out.go:201] 
	W0904 13:07:14.966018    3810 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:07:14.966042    3810 out.go:270] * 
	* 
	W0904 13:07:14.968734    3810 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:07:14.983648    3810 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-452000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (66.894833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.87s)
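
The verbose log above also shows how the driver attaches QEMU to the vmnet network: it execs /opt/socket_vmnet/bin/socket_vmnet_client with the daemon socket path followed by the qemu-system-aarch64 command line, and QEMU is told to use an already-open connection via "-netdev socket,id=net0,fd=3". A minimal Go sketch of that pass-a-socket-as-fd-3 pattern (an illustration of the mechanism under stated assumptions, not minikube's implementation):

	// Sketch: hand a unix-socket connection to a child process as fd 3,
	// the descriptor referenced by "-netdev socket,id=net0,fd=3" above.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		addr := &net.UnixAddr{Name: "/var/run/socket_vmnet", Net: "unix"}
		conn, err := net.DialUnix("unix", nil, addr)
		if err != nil {
			log.Fatalf("dial socket_vmnet: %v", err) // the step failing in this run
		}
		f, err := conn.File() // duplicate the connection as an *os.File
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child (after stdin/stdout/stderr).
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("qemu: %v", err)
		}
	}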

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (77.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.384125ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-452000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- rollout status deployment/busybox: exit status 1 (58.136417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.820541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.894375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.78675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.482916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.745417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.53175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.687292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.465333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.48225ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0904 13:08:27.784594    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.703209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.259958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.227042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.159625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.325541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (30.171709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (77.48s)
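
The 77s spent in this test is almost entirely retry time: each kubectl call fails within roughly 100ms, and the harness keeps polling for pod IPs before giving up. A sketch of that kind of poll loop follows; only the command and profile name come from the log, while the attempt count and backoff schedule are assumptions:

	// Sketch: poll for pod IPs with backoff, producing the repeated
	// "failed to retrieve Pod IPs (may be temporary)" lines seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func podIPs(profile string) (string, error) {
		// Same query the test retries above; output stays empty until pods exist.
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		return string(out), err
	}

	func main() {
		delay := time.Second
		for attempt := 1; attempt <= 10; attempt++ {
			if ips, err := podIPs("multinode-452000"); err == nil && ips != "" {
				fmt.Println("pod IPs:", ips)
				return
			}
			fmt.Printf("attempt %d: failed to retrieve Pod IPs (may be temporary)\n", attempt)
			time.Sleep(delay)
			if delay < 16*time.Second {
				delay *= 2 // assumed backoff; roughly matches the ~77s total above
			}
		}
		fmt.Println("failed to resolve pod IPs")
	}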

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.037166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (30.526208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-452000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-452000 -v 3 --alsologtostderr: exit status 83 (42.648792ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-452000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-452000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:08:32.661296    3900 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:32.661457    3900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:32.661460    3900 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:32.661462    3900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:32.661586    3900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:32.661798    3900 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:32.661978    3900 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:32.666970    3900 out.go:177] * The control-plane node multinode-452000 host is not running: state=Stopped
	I0904 13:08:32.670980    3900 out.go:177]   To start a cluster, run: "minikube start -p multinode-452000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-452000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (30.470416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-452000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-452000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.735209ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-452000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-452000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-452000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (29.827917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-452000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-452000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-452000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,
\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-452000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"C
ontrolPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\
"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (30.446667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
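
The assertion above parses the 'profile list --output json' payload and counts the entries under Config.Nodes; since the cluster was never created, the saved config still lists only the single requested control-plane placeholder instead of three nodes. A minimal sketch of that count, with the struct shape inferred from the JSON in the failure message and trimmed to the needed fields:

	// Sketch: count nodes per profile in "profile list --output json" output.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only the fields needed for the node count; inferred from the logged JSON.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name string
				}
			}
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-452000",` +
			`"Config":{"Nodes":[{"Name":""}]}}]}`) // trimmed from the log above
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test expects 3 nodes here; the stopped profile reports 1.
			fmt.Printf("%s: %d nodes\n", p.Name, len(p.Config.Nodes))
		}
	}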

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status --output json --alsologtostderr: exit status 7 (29.899375ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-452000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0904 13:08:32.871980    3912 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:32.872110    3912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:32.872113    3912 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:32.872116    3912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:32.872234    3912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:32.872343    3912 out.go:352] Setting JSON to true
	I0904 13:08:32.872353    3912 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:32.872411    3912 notify.go:220] Checking for updates...
	I0904 13:08:32.872550    3912 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:32.872562    3912 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:32.872779    3912 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:32.872783    3912 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:32.872785    3912 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-452000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
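	The decode failure at multinode_test.go:191 is a shape mismatch, not malformed JSON: with only the primary node left, status --output json emits a single object, while the test unmarshals into a slice ([]cmd.Status) because it expects one entry per node. A minimal reproduction of the same error class, with a stand-in Status struct:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status stands in for minikube's cmd.Status.
	type Status struct {
		Name, Host string
	}

	func main() {
		var many []Status
		obj := []byte(`{"Name":"multinode-452000","Host":"Stopped"}`)
		// A bare object cannot populate a slice; this prints:
		// json: cannot unmarshal object into Go value of type []main.Status
		fmt.Println(json.Unmarshal(obj, &many))

		arr := []byte(`[{"Name":"multinode-452000","Host":"Stopped"}]`)
		// Wrapping the object in an array succeeds.
		fmt.Println(json.Unmarshal(arr, &many), many)
	}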
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (30.009542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 node stop m03: exit status 85 (46.502167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-452000 node stop m03": exit status 85
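	Exit status 85 here is consistent rather than surprising: the profile's config dump earlier in this report lists a single node, so there is no "m03" to stop. Minikube names the primary node after the profile and gives secondary nodes an -mNN suffix; the helper below is an illustration of that scheme, not code lifted from minikube:

	package main

	import "fmt"

	// nodeName sketches minikube's node naming: the primary node uses the
	// profile name itself, secondary nodes get an -mNN suffix.
	func nodeName(profile string, idx int) string {
		if idx == 1 {
			return profile
		}
		return fmt.Sprintf("%s-m%02d", profile, idx)
	}

	func main() {
		for i := 1; i <= 3; i++ {
			fmt.Println(nodeName("multinode-452000", i))
		}
		// Output:
		// multinode-452000
		// multinode-452000-m02
		// multinode-452000-m03
	}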
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status: exit status 7 (29.354166ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr: exit status 7 (29.71425ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:33.008427    3920 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:33.008580    3920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:33.008583    3920 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:33.008585    3920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:33.008712    3920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:33.008825    3920 out.go:352] Setting JSON to false
	I0904 13:08:33.008835    3920 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:33.008892    3920 notify.go:220] Checking for updates...
	I0904 13:08:33.009029    3920 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:33.009036    3920 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:33.009276    3920 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:33.009280    3920 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:33.009282    3920 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (29.552ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (44.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.761166ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0904 13:08:33.069061    3924 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:33.069293    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:33.069296    3924 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:33.069299    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:33.069438    3924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:33.069654    3924 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:33.069859    3924 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:33.073944    3924 out.go:201] 
	W0904 13:08:33.077009    3924 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0904 13:08:33.077014    3924 out.go:270] * 
	* 
	W0904 13:08:33.078706    3924 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:08:33.081999    3924 out.go:201] 

** /stderr **
multinode_test.go:284: I0904 13:08:33.069061    3924 out.go:345] Setting OutFile to fd 1 ...
I0904 13:08:33.069293    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 13:08:33.069296    3924 out.go:358] Setting ErrFile to fd 2...
I0904 13:08:33.069299    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 13:08:33.069438    3924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
I0904 13:08:33.069654    3924 mustload.go:65] Loading cluster: multinode-452000
I0904 13:08:33.069859    3924 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 13:08:33.073944    3924 out.go:201] 
W0904 13:08:33.077009    3924 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0904 13:08:33.077014    3924 out.go:270] * 
* 
W0904 13:08:33.078706    3924 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0904 13:08:33.081999    3924 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-452000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (30.560958ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:33.115742    3926 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:33.115896    3926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:33.115899    3926 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:33.115901    3926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:33.116030    3926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:33.116145    3926 out.go:352] Setting JSON to false
	I0904 13:08:33.116156    3926 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:33.116211    3926 notify.go:220] Checking for updates...
	I0904 13:08:33.116375    3926 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:33.116383    3926 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:33.116601    3926 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:33.116605    3926 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:33.116609    3926 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (74.407333ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:34.136757    3928 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:34.136934    3928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:34.136938    3928 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:34.136941    3928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:34.137109    3928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:34.137268    3928 out.go:352] Setting JSON to false
	I0904 13:08:34.137280    3928 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:34.137317    3928 notify.go:220] Checking for updates...
	I0904 13:08:34.137540    3928 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:34.137550    3928 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:34.137825    3928 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:34.137830    3928 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:34.137833    3928 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (73.232625ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:35.182916    3930 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:35.183101    3930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:35.183105    3930 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:35.183108    3930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:35.183296    3930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:35.183468    3930 out.go:352] Setting JSON to false
	I0904 13:08:35.183482    3930 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:35.183526    3930 notify.go:220] Checking for updates...
	I0904 13:08:35.183738    3930 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:35.183754    3930 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:35.184047    3930 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:35.184052    3930 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:35.184055    3930 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (75.081625ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:37.972385    3932 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:37.972602    3932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:37.972611    3932 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:37.972615    3932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:37.972810    3932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:37.972970    3932 out.go:352] Setting JSON to false
	I0904 13:08:37.972984    3932 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:37.973025    3932 notify.go:220] Checking for updates...
	I0904 13:08:37.973246    3932 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:37.973256    3932 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:37.973517    3932 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:37.973522    3932 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:37.973525    3932 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (72.427292ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:42.702127    3934 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:42.702319    3934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:42.702324    3934 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:42.702328    3934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:42.702495    3934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:42.702644    3934 out.go:352] Setting JSON to false
	I0904 13:08:42.702657    3934 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:42.702691    3934 notify.go:220] Checking for updates...
	I0904 13:08:42.702917    3934 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:42.702927    3934 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:42.703203    3934 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:42.703209    3934 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:42.703212    3934 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (70.791292ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:49.177608    3941 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:49.177822    3941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:49.177827    3941 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:49.177831    3941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:49.178012    3941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:49.178194    3941 out.go:352] Setting JSON to false
	I0904 13:08:49.178209    3941 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:49.178255    3941 notify.go:220] Checking for updates...
	I0904 13:08:49.178512    3941 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:49.178523    3941 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:49.178844    3941 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:49.178850    3941 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:49.178853    3941 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0904 13:08:55.870622    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (74.018041ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:08:57.483323    3946 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:08:57.483570    3946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:57.483574    3946 out.go:358] Setting ErrFile to fd 2...
	I0904 13:08:57.483578    3946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:08:57.483729    3946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:08:57.483896    3946 out.go:352] Setting JSON to false
	I0904 13:08:57.483912    3946 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:08:57.483950    3946 notify.go:220] Checking for updates...
	I0904 13:08:57.484194    3946 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:08:57.484205    3946 status.go:255] checking status of multinode-452000 ...
	I0904 13:08:57.484509    3946 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:08:57.484515    3946 status.go:343] host is not running, skipping remaining checks
	I0904 13:08:57.484518    3946 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (73.37125ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:09:06.465669    3948 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:09:06.465853    3948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:06.465857    3948 out.go:358] Setting ErrFile to fd 2...
	I0904 13:09:06.465861    3948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:06.466053    3948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:09:06.466199    3948 out.go:352] Setting JSON to false
	I0904 13:09:06.466213    3948 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:09:06.466256    3948 notify.go:220] Checking for updates...
	I0904 13:09:06.466501    3948 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:09:06.466513    3948 status.go:255] checking status of multinode-452000 ...
	I0904 13:09:06.466834    3948 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:09:06.466839    3948 status.go:343] host is not running, skipping remaining checks
	I0904 13:09:06.466842    3948 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (71.120375ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:09:17.714844    3952 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:09:17.715033    3952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:17.715037    3952 out.go:358] Setting ErrFile to fd 2...
	I0904 13:09:17.715040    3952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:17.715198    3952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:09:17.715374    3952 out.go:352] Setting JSON to false
	I0904 13:09:17.715387    3952 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:09:17.715426    3952 notify.go:220] Checking for updates...
	I0904 13:09:17.715671    3952 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:09:17.715683    3952 status.go:255] checking status of multinode-452000 ...
	I0904 13:09:17.715964    3952 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:09:17.715970    3952 status.go:343] host is not running, skipping remaining checks
	I0904 13:09:17.715973    3952 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-452000 status -v=7 --alsologtostderr" : exit status 7
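	The timestamps on the retries above (13:08:33, :34, :35, :37, :42, :49, :57, then 13:09:06 and 13:09:17) show the test polling status at roughly doubling intervals until it gives up after about 44 seconds. A hedged sketch of that kind of backoff loop; the command, success predicate, and limits here are stand-ins, not the test suite's actual retry helper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(45 * time.Second)
		delay := time.Second
		for time.Now().Before(deadline) {
			out, _ := exec.Command("out/minikube-darwin-arm64",
				"-p", "multinode-452000", "status").CombinedOutput()
			// Crude predicate: stop polling once nothing reports Stopped.
			if !strings.Contains(string(out), "Stopped") {
				fmt.Println("cluster came up")
				return
			}
			time.Sleep(delay)
			delay *= 2 // roughly doubles, matching the widening gaps in the log
		}
		fmt.Println("timed out waiting for status")
	}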
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (34.271ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (44.71s)

TestMultiNode/serial/RestartKeepsNodes (8.87s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-452000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-452000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-452000: (3.520676917s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22219425s)

-- stdout --
	* [multinode-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	* Restarting existing qemu2 VM for "multinode-452000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-452000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:09:21.363719    3979 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:09:21.363867    3979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:21.363871    3979 out.go:358] Setting ErrFile to fd 2...
	I0904 13:09:21.363874    3979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:21.364020    3979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:09:21.365228    3979 out.go:352] Setting JSON to false
	I0904 13:09:21.383578    3979 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4125,"bootTime":1725476436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:09:21.383646    3979 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:09:21.388237    3979 out.go:177] * [multinode-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:09:21.395018    3979 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:09:21.395053    3979 notify.go:220] Checking for updates...
	I0904 13:09:21.402209    3979 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:09:21.403573    3979 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:09:21.407185    3979 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:09:21.410179    3979 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:09:21.413227    3979 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:09:21.416460    3979 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:09:21.416518    3979 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:09:21.421149    3979 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:09:21.428148    3979 start.go:297] selected driver: qemu2
	I0904 13:09:21.428155    3979 start.go:901] validating driver "qemu2" against &{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-452000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:09:21.428218    3979 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:09:21.430487    3979 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:09:21.430531    3979 cni.go:84] Creating CNI manager for ""
	I0904 13:09:21.430541    3979 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0904 13:09:21.430587    3979 start.go:340] cluster config:
	{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:09:21.434087    3979 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:21.443069    3979 out.go:177] * Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	I0904 13:09:21.447126    3979 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:09:21.447151    3979 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:09:21.447157    3979 cache.go:56] Caching tarball of preloaded images
	I0904 13:09:21.447227    3979 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:09:21.447233    3979 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:09:21.447297    3979 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/multinode-452000/config.json ...
	I0904 13:09:21.447759    3979 start.go:360] acquireMachinesLock for multinode-452000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:09:21.447798    3979 start.go:364] duration metric: took 32.042µs to acquireMachinesLock for "multinode-452000"
	I0904 13:09:21.447809    3979 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:09:21.447815    3979 fix.go:54] fixHost starting: 
	I0904 13:09:21.447950    3979 fix.go:112] recreateIfNeeded on multinode-452000: state=Stopped err=<nil>
	W0904 13:09:21.447958    3979 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:09:21.452176    3979 out.go:177] * Restarting existing qemu2 VM for "multinode-452000" ...
	I0904 13:09:21.460232    3979 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:09:21.460318    3979 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a5:f4:80:86:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:09:21.462605    3979 main.go:141] libmachine: STDOUT: 
	I0904 13:09:21.462634    3979 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:09:21.462660    3979 fix.go:56] duration metric: took 14.844584ms for fixHost
	I0904 13:09:21.462699    3979 start.go:83] releasing machines lock for "multinode-452000", held for 14.896375ms
	W0904 13:09:21.462705    3979 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:09:21.462737    3979 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:09:21.462742    3979 start.go:729] Will try again in 5 seconds ...
	I0904 13:09:26.464779    3979 start.go:360] acquireMachinesLock for multinode-452000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:09:26.465317    3979 start.go:364] duration metric: took 417.375µs to acquireMachinesLock for "multinode-452000"
	I0904 13:09:26.465467    3979 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:09:26.465487    3979 fix.go:54] fixHost starting: 
	I0904 13:09:26.466323    3979 fix.go:112] recreateIfNeeded on multinode-452000: state=Stopped err=<nil>
	W0904 13:09:26.466351    3979 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:09:26.470807    3979 out.go:177] * Restarting existing qemu2 VM for "multinode-452000" ...
	I0904 13:09:26.478747    3979 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:09:26.478954    3979 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a5:f4:80:86:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:09:26.487813    3979 main.go:141] libmachine: STDOUT: 
	I0904 13:09:26.487900    3979 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:09:26.487991    3979 fix.go:56] duration metric: took 22.504917ms for fixHost
	I0904 13:09:26.488017    3979 start.go:83] releasing machines lock for "multinode-452000", held for 22.652542ms
	W0904 13:09:26.488193    3979 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-452000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-452000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:09:26.496790    3979 out.go:201] 
	W0904 13:09:26.500942    3979 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:09:26.500973    3979 out.go:270] * 
	* 
	W0904 13:09:26.503449    3979 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:09:26.510714    3979 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-452000" : exit status 80
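	Every restart attempt above dies in the same place: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon is not accepting connections on the build host. A quick probe for that condition; the paths are taken from this log, and the program is an illustration, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "Connection refused" on a unix socket means the path exists but no
		// daemon is listening behind it.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}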
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-452000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (32.714708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.87s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 node delete m03: exit status 83 (38.814125ms)

-- stdout --
	* The control-plane node multinode-452000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-452000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-452000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr: exit status 7 (30.659541ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:09:26.693024    3993 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:09:26.693179    3993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:26.693183    3993 out.go:358] Setting ErrFile to fd 2...
	I0904 13:09:26.693185    3993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:26.693332    3993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:09:26.693459    3993 out.go:352] Setting JSON to false
	I0904 13:09:26.693469    3993 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:09:26.693525    3993 notify.go:220] Checking for updates...
	I0904 13:09:26.693669    3993 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:09:26.693677    3993 status.go:255] checking status of multinode-452000 ...
	I0904 13:09:26.693896    3993 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:09:26.693899    3993 status.go:343] host is not running, skipping remaining checks
	I0904 13:09:26.693902    3993 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr" : exit status 7
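`minikube status` reports component states in both its output and its exit code; the post-mortem helpers below explicitly tolerate exit status 7 from a stopped profile ("may be ok") and only inspect the rendered Host field. A hypothetical sketch of that style of check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Render only the Host field, as the post-mortem helper does.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-452000", "-n", "multinode-452000")
	out, _ := cmd.Output() // exit status 7 is expected from a stopped host
	if strings.TrimSpace(string(out)) == "Stopped" {
		fmt.Println(`host is not running, skipping log retrieval (state="Stopped")`)
	}
}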
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (30.014416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.86s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-452000 stop: (3.728299209s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status: exit status 7 (69.583041ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr: exit status 7 (33.230167ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0904 13:09:30.554746    4023 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:09:30.554917    4023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:30.554920    4023 out.go:358] Setting ErrFile to fd 2...
	I0904 13:09:30.554922    4023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:30.555041    4023 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:09:30.555171    4023 out.go:352] Setting JSON to false
	I0904 13:09:30.555181    4023 mustload.go:65] Loading cluster: multinode-452000
	I0904 13:09:30.555255    4023 notify.go:220] Checking for updates...
	I0904 13:09:30.555411    4023 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:09:30.555418    4023 status.go:255] checking status of multinode-452000 ...
	I0904 13:09:30.555627    4023 status.go:330] multinode-452000 host status = "Stopped" (err=<nil>)
	I0904 13:09:30.555630    4023 status.go:343] host is not running, skipping remaining checks
	I0904 13:09:30.555633    4023 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (29.905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.86s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179706625s)

-- stdout --
	* [multinode-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	* Restarting existing qemu2 VM for "multinode-452000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-452000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0904 13:09:30.615001    4027 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:09:30.615138    4027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:30.615141    4027 out.go:358] Setting ErrFile to fd 2...
	I0904 13:09:30.615144    4027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:30.615298    4027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:09:30.616327    4027 out.go:352] Setting JSON to false
	I0904 13:09:30.632686    4027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4134,"bootTime":1725476436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:09:30.632781    4027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:09:30.637914    4027 out.go:177] * [multinode-452000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:09:30.645101    4027 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:09:30.645152    4027 notify.go:220] Checking for updates...
	I0904 13:09:30.651044    4027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:09:30.654080    4027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:09:30.655448    4027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:09:30.658072    4027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:09:30.661101    4027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:09:30.664329    4027 config.go:182] Loaded profile config "multinode-452000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:09:30.664583    4027 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:09:30.669032    4027 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:09:30.676021    4027 start.go:297] selected driver: qemu2
	I0904 13:09:30.676027    4027 start.go:901] validating driver "qemu2" against &{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-452000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:09:30.676084    4027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:09:30.678297    4027 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:09:30.678340    4027 cni.go:84] Creating CNI manager for ""
	I0904 13:09:30.678344    4027 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0904 13:09:30.678390    4027 start.go:340] cluster config:
	{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:09:30.681731    4027 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:30.690064    4027 out.go:177] * Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	I0904 13:09:30.694043    4027 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:09:30.694057    4027 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:09:30.694065    4027 cache.go:56] Caching tarball of preloaded images
	I0904 13:09:30.694122    4027 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:09:30.694128    4027 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:09:30.694179    4027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/multinode-452000/config.json ...
	I0904 13:09:30.694612    4027 start.go:360] acquireMachinesLock for multinode-452000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:09:30.694647    4027 start.go:364] duration metric: took 28.708µs to acquireMachinesLock for "multinode-452000"
	I0904 13:09:30.694656    4027 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:09:30.694660    4027 fix.go:54] fixHost starting: 
	I0904 13:09:30.694781    4027 fix.go:112] recreateIfNeeded on multinode-452000: state=Stopped err=<nil>
	W0904 13:09:30.694788    4027 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:09:30.699053    4027 out.go:177] * Restarting existing qemu2 VM for "multinode-452000" ...
	I0904 13:09:30.707080    4027 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:09:30.707120    4027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a5:f4:80:86:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:09:30.709011    4027 main.go:141] libmachine: STDOUT: 
	I0904 13:09:30.709030    4027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:09:30.709055    4027 fix.go:56] duration metric: took 14.394792ms for fixHost
	I0904 13:09:30.709059    4027 start.go:83] releasing machines lock for "multinode-452000", held for 14.408417ms
	W0904 13:09:30.709066    4027 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:09:30.709102    4027 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:09:30.709106    4027 start.go:729] Will try again in 5 seconds ...
	I0904 13:09:35.711258    4027 start.go:360] acquireMachinesLock for multinode-452000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:09:35.711758    4027 start.go:364] duration metric: took 349.041µs to acquireMachinesLock for "multinode-452000"
	I0904 13:09:35.711893    4027 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:09:35.711914    4027 fix.go:54] fixHost starting: 
	I0904 13:09:35.712617    4027 fix.go:112] recreateIfNeeded on multinode-452000: state=Stopped err=<nil>
	W0904 13:09:35.712645    4027 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:09:35.717197    4027 out.go:177] * Restarting existing qemu2 VM for "multinode-452000" ...
	I0904 13:09:35.721123    4027 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:09:35.721450    4027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a5:f4:80:86:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/multinode-452000/disk.qcow2
	I0904 13:09:35.730540    4027 main.go:141] libmachine: STDOUT: 
	I0904 13:09:35.730601    4027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:09:35.730670    4027 fix.go:56] duration metric: took 18.76125ms for fixHost
	I0904 13:09:35.730685    4027 start.go:83] releasing machines lock for "multinode-452000", held for 18.907458ms
	W0904 13:09:35.730845    4027 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-452000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-452000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:09:35.739147    4027 out.go:201] 
	W0904 13:09:35.742048    4027 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:09:35.742073    4027 out.go:270] * 
	* 
	W0904 13:09:35.744945    4027 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:09:35.753059    4027 out.go:201] 

** /stderr **
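The stderr above also records how the qemu2 driver launches the VM: qemu-system-aarch64 is not exec'd directly but wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial /var/run/socket_vmnet and then hands the connection to qemu as fd 3 (hence `-netdev socket,id=net0,fd=3` in the log). With the daemon down, the wrapper fails before qemu ever starts. An abridged sketch of that invocation (most flags elided; paths are from this run's environment):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// socket_vmnet_client dials the daemon socket, then execs the real
	// command with the remaining arguments. Flags abridged from the
	// main.go:141 log line above.
	args := []string{
		"/var/run/socket_vmnet", // must be reachable before qemu runs at all
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		// ... -drive/-cdrom/-qmp/-netdev flags elided; see the log above
	}
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("%s: %v\n", out, err) // "Connection refused" when the daemon is down
	}
}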
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (71.215959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-452000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-452000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-452000-m01 --driver=qemu2 : exit status 80 (10.022136833s)

-- stdout --
	* [multinode-452000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-452000-m01" primary control-plane node in "multinode-452000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-452000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-452000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
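For context, ValidateNameConflict exercises profile naming: judging by the assertions, the `-m01` start above is expected to fail (the suffix collides with the names minikube gives secondary nodes), while the `-m02` start below is expected to succeed as an independent profile, and it is that start which breaks on the same socket_vmnet error. A hypothetical sketch of a suffix check in that spirit (the regexp is an assumption for illustration, not minikube's validation code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical check: profile names ending in -mNN collide with the
	// names minikube gives secondary nodes (m02, m03, ...).
	nodeSuffix := regexp.MustCompile(`-m\d{2}$`)
	for _, name := range []string{"multinode-452000-m01", "multinode-452000-m02"} {
		fmt.Printf("%s collides with node naming: %v\n", name, nodeSuffix.MatchString(name))
	}
}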
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-452000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-452000-m02 --driver=qemu2 : exit status 80 (9.865187209s)

-- stdout --
	* [multinode-452000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-452000-m02" primary control-plane node in "multinode-452000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-452000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-452000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-452000-m02 --driver=qemu2 " : exit status 80
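Both start attempts in this test follow the same fixed retry visible throughout the report: StartHost fails, the half-created machine is deleted, the driver waits five seconds ("Will try again in 5 seconds ..."), retries once, then exits 80. A sketch of that shape of loop (startHost is a stand-in for the driver's create/fix path, not minikube's actual function):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver's create/fix path; in this run it
// always fails the same way.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	var err error
	for attempt := 0; attempt < 2; attempt++ {
		if err = startHost(); err == nil {
			return
		}
		if attempt == 0 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the fixed delay seen in the logs
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
}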
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-452000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-452000: exit status 83 (80.902542ms)

-- stdout --
	* The control-plane node multinode-452000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-452000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-452000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (29.609666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.12s)

TestPreload (10.05s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-229000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-229000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.898321166s)

-- stdout --
	* [test-preload-229000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-229000" primary control-plane node in "test-preload-229000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-229000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0904 13:09:56.093361    4085 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:09:56.093486    4085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:56.093489    4085 out.go:358] Setting ErrFile to fd 2...
	I0904 13:09:56.093491    4085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:09:56.093604    4085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:09:56.094634    4085 out.go:352] Setting JSON to false
	I0904 13:09:56.110730    4085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4160,"bootTime":1725476436,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:09:56.110794    4085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:09:56.117251    4085 out.go:177] * [test-preload-229000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:09:56.125238    4085 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:09:56.125286    4085 notify.go:220] Checking for updates...
	I0904 13:09:56.133140    4085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:09:56.136249    4085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:09:56.139249    4085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:09:56.140813    4085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:09:56.144240    4085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:09:56.147630    4085 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:09:56.147679    4085 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:09:56.152074    4085 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:09:56.159176    4085 start.go:297] selected driver: qemu2
	I0904 13:09:56.159183    4085 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:09:56.159189    4085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:09:56.161393    4085 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:09:56.164259    4085 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:09:56.167279    4085 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:09:56.167295    4085 cni.go:84] Creating CNI manager for ""
	I0904 13:09:56.167302    4085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:09:56.167308    4085 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:09:56.167337    4085 start.go:340] cluster config:
	{Name:test-preload-229000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Socket
VMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:09:56.170869    4085 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.179128    4085 out.go:177] * Starting "test-preload-229000" primary control-plane node in "test-preload-229000" cluster
	I0904 13:09:56.183190    4085 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0904 13:09:56.183261    4085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/test-preload-229000/config.json ...
	I0904 13:09:56.183280    4085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/test-preload-229000/config.json: {Name:mked6d3a50dec7fa06a7a8cfc58e76a723c17d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:09:56.183274    4085 cache.go:107] acquiring lock: {Name:mk45b67adc7e8663e20155223515d901dc129adc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183286    4085 cache.go:107] acquiring lock: {Name:mk0f553be9cdd3ab457b5687abf5e71d2ac6903a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183280    4085 cache.go:107] acquiring lock: {Name:mkb4ae0d9bf7d773421685237d7c14055316f39d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183269    4085 cache.go:107] acquiring lock: {Name:mkd1fa8a10c4c3e5d814e251a967a29368832fc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183441    4085 cache.go:107] acquiring lock: {Name:mk89a6ae62f3b26a6e29c3aadde9c452017b3d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183471    4085 cache.go:107] acquiring lock: {Name:mk3789d45d0a1eeb5998ccb7309eb38cd37518aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183487    4085 cache.go:107] acquiring lock: {Name:mk1d80060359a247e3599b276b90f8d2fb539cc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183540    4085 cache.go:107] acquiring lock: {Name:mkf441dbb0709753efe3192c73ecb9ed3bb239ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:09:56.183593    4085 start.go:360] acquireMachinesLock for test-preload-229000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:09:56.183626    4085 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "test-preload-229000"
	I0904 13:09:56.183681    4085 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0904 13:09:56.183698    4085 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:09:56.183639    4085 start.go:93] Provisioning new machine with config: &{Name:test-preload-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-2
29000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:09:56.183737    4085 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0904 13:09:56.183747    4085 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:09:56.183681    4085 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0904 13:09:56.183777    4085 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0904 13:09:56.183794    4085 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0904 13:09:56.183749    4085 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:09:56.188488    4085 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:09:56.192161    4085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:09:56.197041    4085 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:09:56.197070    4085 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0904 13:09:56.197292    4085 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0904 13:09:56.200748    4085 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0904 13:09:56.200769    4085 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:09:56.200773    4085 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0904 13:09:56.200805    4085 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0904 13:09:56.200883    4085 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:09:56.210655    4085 start.go:159] libmachine.API.Create for "test-preload-229000" (driver="qemu2")
	I0904 13:09:56.210675    4085 client.go:168] LocalClient.Create starting
	I0904 13:09:56.210752    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:09:56.210783    4085 main.go:141] libmachine: Decoding PEM data...
	I0904 13:09:56.210793    4085 main.go:141] libmachine: Parsing certificate...
	I0904 13:09:56.210830    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:09:56.210854    4085 main.go:141] libmachine: Decoding PEM data...
	I0904 13:09:56.210863    4085 main.go:141] libmachine: Parsing certificate...
	I0904 13:09:56.211186    4085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:09:56.360926    4085 main.go:141] libmachine: Creating SSH key...
	I0904 13:09:56.530550    4085 main.go:141] libmachine: Creating Disk image...
	I0904 13:09:56.530566    4085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:09:56.530795    4085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2
	I0904 13:09:56.540575    4085 main.go:141] libmachine: STDOUT: 
	I0904 13:09:56.540594    4085 main.go:141] libmachine: STDERR: 
	I0904 13:09:56.540636    4085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2 +20000M
	I0904 13:09:56.549698    4085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:09:56.549720    4085 main.go:141] libmachine: STDERR: 
	I0904 13:09:56.549737    4085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2
	I0904 13:09:56.549741    4085 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:09:56.549753    4085 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:09:56.549776    4085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:3e:50:2f:a0:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2
	I0904 13:09:56.551705    4085 main.go:141] libmachine: STDOUT: 
	I0904 13:09:56.551723    4085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:09:56.551739    4085 client.go:171] duration metric: took 341.065625ms to LocalClient.Create
	I0904 13:09:56.625183    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0904 13:09:56.638967    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0904 13:09:56.662599    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0904 13:09:56.674486    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0904 13:09:56.699886    4085 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0904 13:09:56.699910    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0904 13:09:56.714949    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0904 13:09:56.737362    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0904 13:09:56.836864    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0904 13:09:56.836897    4085 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 653.5385ms
	I0904 13:09:56.836939    4085 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0904 13:09:57.374582    4085 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0904 13:09:57.374730    4085 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0904 13:09:57.807213    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0904 13:09:57.807325    4085 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.624033875s
	I0904 13:09:57.807365    4085 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0904 13:09:58.047634    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0904 13:09:58.047686    4085 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.864344958s
	I0904 13:09:58.047735    4085 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0904 13:09:58.551952    4085 start.go:128] duration metric: took 2.368212083s to createHost
	I0904 13:09:58.552040    4085 start.go:83] releasing machines lock for "test-preload-229000", held for 2.368440917s
	W0904 13:09:58.552093    4085 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:09:58.564412    4085 out.go:177] * Deleting "test-preload-229000" in qemu2 ...
	W0904 13:09:58.595106    4085 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:09:58.595132    4085 start.go:729] Will try again in 5 seconds ...
	I0904 13:09:59.768930    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0904 13:09:59.768984    4085 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.585748834s
	I0904 13:09:59.769011    4085 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0904 13:10:00.198657    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0904 13:10:00.198698    4085 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.015494167s
	I0904 13:10:00.198721    4085 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0904 13:10:00.900234    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0904 13:10:00.900283    4085 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.717086583s
	I0904 13:10:00.900310    4085 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0904 13:10:02.039602    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0904 13:10:02.039653    4085 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.856297709s
	I0904 13:10:02.039698    4085 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0904 13:10:03.595707    4085 start.go:360] acquireMachinesLock for test-preload-229000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:10:03.596163    4085 start.go:364] duration metric: took 374.334µs to acquireMachinesLock for "test-preload-229000"
	I0904 13:10:03.596289    4085 start.go:93] Provisioning new machine with config: &{Name:test-preload-229000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-229000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:10:03.596513    4085 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:10:03.605717    4085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:10:03.657339    4085 start.go:159] libmachine.API.Create for "test-preload-229000" (driver="qemu2")
	I0904 13:10:03.657385    4085 client.go:168] LocalClient.Create starting
	I0904 13:10:03.657518    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:10:03.657584    4085 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:03.657608    4085 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:03.657676    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:10:03.657720    4085 main.go:141] libmachine: Decoding PEM data...
	I0904 13:10:03.657733    4085 main.go:141] libmachine: Parsing certificate...
	I0904 13:10:03.658226    4085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:10:03.817746    4085 main.go:141] libmachine: Creating SSH key...
	I0904 13:10:03.889263    4085 main.go:141] libmachine: Creating Disk image...
	I0904 13:10:03.889269    4085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:10:03.889480    4085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2
	I0904 13:10:03.899329    4085 main.go:141] libmachine: STDOUT: 
	I0904 13:10:03.899348    4085 main.go:141] libmachine: STDERR: 
	I0904 13:10:03.899411    4085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2 +20000M
	I0904 13:10:03.907583    4085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:10:03.907600    4085 main.go:141] libmachine: STDERR: 
	I0904 13:10:03.907618    4085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2
	I0904 13:10:03.907621    4085 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:10:03.907634    4085 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:10:03.907664    4085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:fe:d9:c7:b4:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/test-preload-229000/disk.qcow2
	I0904 13:10:03.909455    4085 main.go:141] libmachine: STDOUT: 
	I0904 13:10:03.909476    4085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:10:03.909488    4085 client.go:171] duration metric: took 252.100917ms to LocalClient.Create
	I0904 13:10:05.146283    4085 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0904 13:10:05.146343    4085 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.963024459s
	I0904 13:10:05.146368    4085 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0904 13:10:05.146466    4085 cache.go:87] Successfully saved all images to host disk.
	I0904 13:10:05.911676    4085 start.go:128] duration metric: took 2.31516475s to createHost
	I0904 13:10:05.911737    4085 start.go:83] releasing machines lock for "test-preload-229000", held for 2.315585959s
	W0904 13:10:05.912102    4085 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-229000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-229000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:10:05.927652    4085 out.go:201] 
	W0904 13:10:05.931741    4085 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:10:05.931774    4085 out.go:270] * 
	* 
	W0904 13:10:05.934459    4085 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:10:05.949548    4085 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-229000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-09-04 13:10:05.96673 -0700 PDT m=+2723.024238209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-229000 -n test-preload-229000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-229000 -n test-preload-229000: exit status 7 (68.670166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-229000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-229000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-229000
--- FAIL: TestPreload (10.05s)
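Editor's note: every qemu2 start in this block dies at the same step: minikube's socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and host creation aborts. A quick host-side sanity check on the build agent (a hypothetical diagnostic, not part of the test run) would be:

	# Does the socket the log is dialing actually exist?
	ls -l /var/run/socket_vmnet
	# Is any socket_vmnet process alive to serve it?
	pgrep -fl socket_vmnet

If the socket file is missing or no process is listed, the daemon is down, and every test that creates a fresh VM on the socket_vmnet network will fail exactly as above.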

TestScheduledStopUnix (9.92s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-508000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-508000 --memory=2048 --driver=qemu2 : exit status 80 (9.767606875s)

-- stdout --
	* [scheduled-stop-508000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-508000" primary control-plane node in "scheduled-stop-508000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-508000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-508000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-508000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-508000" primary control-plane node in "scheduled-stop-508000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-508000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-508000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-09-04 13:10:15.882701 -0700 PDT m=+2732.940378417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-508000 -n scheduled-stop-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-508000 -n scheduled-stop-508000: exit status 7 (68.354209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-508000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-508000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-508000
--- FAIL: TestScheduledStopUnix (9.92s)
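Editor's note: identical failure signature to TestPreload; both VM-creation attempts (the initial one and the automatic retry after "Deleting ... in qemu2") are refused at /var/run/socket_vmnet. If the daemon is managed by launchd, restarting it before re-running the suite might look like the sketch below; the io.github.lima-vm.socket_vmnet label is an assumption based on socket_vmnet's install docs and may differ on this agent:

	# List whatever launchd knows about socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# Force-restart the daemon (assumed label)
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

socket_vmnet has to run as root to create the vmnet interface, hence the sudo.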

TestSkaffold (12.77s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3029147618 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3029147618 version: (1.068440917s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-047000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-047000 --memory=2600 --driver=qemu2 : exit status 80 (9.972208s)

-- stdout --
	* [skaffold-047000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-047000" primary control-plane node in "skaffold-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-047000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-047000" primary control-plane node in "skaffold-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-09-04 13:10:28.653324 -0700 PDT m=+2745.711220501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-047000 -n skaffold-047000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-047000 -n skaffold-047000: exit status 7 (62.86675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-047000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-047000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-047000
--- FAIL: TestSkaffold (12.77s)
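Editor's note: third consecutive test with the same socket_vmnet refusal, which points at a broken host environment rather than anything preload-, scheduled-stop-, or skaffold-specific. Once the daemon is healthy, these three tests can be re-run in isolation with a plain go test invocation from the minikube repo; this is a sketch, and the timeout value and any extra start args this CI job normally passes are assumptions:

	go test ./test/integration \
	  -run 'TestPreload|TestScheduledStopUnix|TestSkaffold' \
	  -timeout 60m -v

This relies on the standard -run regexp filter, so only the three failing tests execute instead of the whole suite.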

TestRunningBinaryUpgrade (591.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2668396958 start -p running-upgrade-478000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2668396958 start -p running-upgrade-478000 --memory=2200 --vm-driver=qemu2 : (54.665944833s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-478000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0904 13:13:27.780994    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 13:13:55.865588    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-478000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.383341334s)

-- stdout --
	* [running-upgrade-478000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-478000" primary control-plane node in "running-upgrade-478000" cluster
	* Updating the running qemu2 "running-upgrade-478000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0904 13:12:05.751357    4490 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:12:05.751490    4490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:12:05.751493    4490 out.go:358] Setting ErrFile to fd 2...
	I0904 13:12:05.751496    4490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:12:05.751641    4490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:12:05.752906    4490 out.go:352] Setting JSON to false
	I0904 13:12:05.769609    4490 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4289,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:12:05.769680    4490 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:12:05.776674    4490 out.go:177] * [running-upgrade-478000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:12:05.783766    4490 notify.go:220] Checking for updates...
	I0904 13:12:05.788631    4490 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:12:05.791687    4490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:12:05.794615    4490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:12:05.797603    4490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:12:05.800682    4490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:12:05.803629    4490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:12:05.806954    4490 config.go:182] Loaded profile config "running-upgrade-478000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:12:05.810655    4490 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0904 13:12:05.813586    4490 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:12:05.817603    4490 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:12:05.824641    4490 start.go:297] selected driver: qemu2
	I0904 13:12:05.824648    4490 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50315 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:12:05.824691    4490 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:12:05.827040    4490 cni.go:84] Creating CNI manager for ""
	I0904 13:12:05.827060    4490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:12:05.827083    4490 start.go:340] cluster config:
	{Name:running-upgrade-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50315 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:12:05.827130    4490 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:12:05.834525    4490 out.go:177] * Starting "running-upgrade-478000" primary control-plane node in "running-upgrade-478000" cluster
	I0904 13:12:05.838650    4490 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0904 13:12:05.838662    4490 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0904 13:12:05.838669    4490 cache.go:56] Caching tarball of preloaded images
	I0904 13:12:05.838710    4490 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:12:05.838714    4490 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0904 13:12:05.838764    4490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/config.json ...
	I0904 13:12:05.839279    4490 start.go:360] acquireMachinesLock for running-upgrade-478000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:12:05.839305    4490 start.go:364] duration metric: took 20.459µs to acquireMachinesLock for "running-upgrade-478000"
	I0904 13:12:05.839314    4490 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:12:05.839318    4490 fix.go:54] fixHost starting: 
	I0904 13:12:05.839945    4490 fix.go:112] recreateIfNeeded on running-upgrade-478000: state=Running err=<nil>
	W0904 13:12:05.839955    4490 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:12:05.847631    4490 out.go:177] * Updating the running qemu2 "running-upgrade-478000" VM ...
	I0904 13:12:05.851663    4490 machine.go:93] provisionDockerMachine start ...
	I0904 13:12:05.851693    4490 main.go:141] libmachine: Using SSH client type: native
	I0904 13:12:05.851800    4490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10339c5a0] 0x10339ee00 <nil>  [] 0s} localhost 50283 <nil> <nil>}
	I0904 13:12:05.851805    4490 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 13:12:05.907121    4490 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-478000
	
	I0904 13:12:05.907133    4490 buildroot.go:166] provisioning hostname "running-upgrade-478000"
	I0904 13:12:05.907179    4490 main.go:141] libmachine: Using SSH client type: native
	I0904 13:12:05.907296    4490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10339c5a0] 0x10339ee00 <nil>  [] 0s} localhost 50283 <nil> <nil>}
	I0904 13:12:05.907302    4490 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-478000 && echo "running-upgrade-478000" | sudo tee /etc/hostname
	I0904 13:12:05.965615    4490 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-478000
	
	I0904 13:12:05.965664    4490 main.go:141] libmachine: Using SSH client type: native
	I0904 13:12:05.965777    4490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10339c5a0] 0x10339ee00 <nil>  [] 0s} localhost 50283 <nil> <nil>}
	I0904 13:12:05.965786    4490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-478000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-478000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-478000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 13:12:06.023634    4490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 13:12:06.023647    4490 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19575-1140/.minikube CaCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19575-1140/.minikube}
	I0904 13:12:06.023656    4490 buildroot.go:174] setting up certificates
	I0904 13:12:06.023660    4490 provision.go:84] configureAuth start
	I0904 13:12:06.023664    4490 provision.go:143] copyHostCerts
	I0904 13:12:06.023727    4490 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem, removing ...
	I0904 13:12:06.023732    4490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem
	I0904 13:12:06.023859    4490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem (1078 bytes)
	I0904 13:12:06.024036    4490 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem, removing ...
	I0904 13:12:06.024039    4490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem
	I0904 13:12:06.024083    4490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem (1123 bytes)
	I0904 13:12:06.024182    4490 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem, removing ...
	I0904 13:12:06.024185    4490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem
	I0904 13:12:06.024228    4490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem (1675 bytes)
	I0904 13:12:06.024312    4490 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-478000 san=[127.0.0.1 localhost minikube running-upgrade-478000]
	I0904 13:12:06.127313    4490 provision.go:177] copyRemoteCerts
	I0904 13:12:06.127357    4490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 13:12:06.127365    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	I0904 13:12:06.159239    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 13:12:06.166307    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0904 13:12:06.173019    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 13:12:06.180514    4490 provision.go:87] duration metric: took 156.845041ms to configureAuth
	I0904 13:12:06.180524    4490 buildroot.go:189] setting minikube options for container-runtime
	I0904 13:12:06.180646    4490 config.go:182] Loaded profile config "running-upgrade-478000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:12:06.180681    4490 main.go:141] libmachine: Using SSH client type: native
	I0904 13:12:06.180764    4490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10339c5a0] 0x10339ee00 <nil>  [] 0s} localhost 50283 <nil> <nil>}
	I0904 13:12:06.180768    4490 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 13:12:06.236255    4490 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 13:12:06.236264    4490 buildroot.go:70] root file system type: tmpfs
	I0904 13:12:06.236317    4490 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 13:12:06.236376    4490 main.go:141] libmachine: Using SSH client type: native
	I0904 13:12:06.236493    4490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10339c5a0] 0x10339ee00 <nil>  [] 0s} localhost 50283 <nil> <nil>}
	I0904 13:12:06.236525    4490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 13:12:06.293688    4490 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 13:12:06.293749    4490 main.go:141] libmachine: Using SSH client type: native
	I0904 13:12:06.293865    4490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10339c5a0] 0x10339ee00 <nil>  [] 0s} localhost 50283 <nil> <nil>}
	I0904 13:12:06.293873    4490 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 13:12:06.351749    4490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 13:12:06.351760    4490 machine.go:96] duration metric: took 500.10025ms to provisionDockerMachine
	I0904 13:12:06.351766    4490 start.go:293] postStartSetup for "running-upgrade-478000" (driver="qemu2")
	I0904 13:12:06.351772    4490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 13:12:06.351818    4490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 13:12:06.351831    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	I0904 13:12:06.381527    4490 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 13:12:06.382915    4490 info.go:137] Remote host: Buildroot 2021.02.12
	I0904 13:12:06.382923    4490 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/addons for local assets ...
	I0904 13:12:06.382991    4490 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/files for local assets ...
	I0904 13:12:06.383081    4490 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem -> 16612.pem in /etc/ssl/certs
	I0904 13:12:06.383169    4490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 13:12:06.385912    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem --> /etc/ssl/certs/16612.pem (1708 bytes)
	I0904 13:12:06.392572    4490 start.go:296] duration metric: took 40.802125ms for postStartSetup
	I0904 13:12:06.392586    4490 fix.go:56] duration metric: took 553.277708ms for fixHost
	I0904 13:12:06.392618    4490 main.go:141] libmachine: Using SSH client type: native
	I0904 13:12:06.392726    4490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10339c5a0] 0x10339ee00 <nil>  [] 0s} localhost 50283 <nil> <nil>}
	I0904 13:12:06.392731    4490 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 13:12:06.449387    4490 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725480726.739938929
	
	I0904 13:12:06.449395    4490 fix.go:216] guest clock: 1725480726.739938929
	I0904 13:12:06.449399    4490 fix.go:229] Guest: 2024-09-04 13:12:06.739938929 -0700 PDT Remote: 2024-09-04 13:12:06.392587 -0700 PDT m=+0.660629834 (delta=347.351929ms)
	I0904 13:12:06.449409    4490 fix.go:200] guest clock delta is within tolerance: 347.351929ms
	I0904 13:12:06.449412    4490 start.go:83] releasing machines lock for "running-upgrade-478000", held for 610.113167ms
	I0904 13:12:06.449475    4490 ssh_runner.go:195] Run: cat /version.json
	I0904 13:12:06.449485    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	I0904 13:12:06.449476    4490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 13:12:06.449515    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	W0904 13:12:06.450067    4490 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50283: connect: connection refused
	I0904 13:12:06.450091    4490 retry.go:31] will retry after 273.358724ms: dial tcp [::1]:50283: connect: connection refused
	W0904 13:12:06.479259    4490 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0904 13:12:06.479309    4490 ssh_runner.go:195] Run: systemctl --version
	I0904 13:12:06.481155    4490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 13:12:06.482885    4490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 13:12:06.482909    4490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0904 13:12:06.485573    4490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0904 13:12:06.489770    4490 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 13:12:06.489780    4490 start.go:495] detecting cgroup driver to use...
	I0904 13:12:06.489846    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 13:12:06.495237    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0904 13:12:06.498565    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 13:12:06.501704    4490 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 13:12:06.501733    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 13:12:06.504713    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 13:12:06.508080    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 13:12:06.511649    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 13:12:06.515092    4490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 13:12:06.518037    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 13:12:06.521082    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 13:12:06.524306    4490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
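
Each Run above is one in-place sed edit to /etc/containerd/config.toml: pin the sandbox image to registry.k8s.io/pause:3.7, set restrict_oom_score_adj = false, set SystemdCgroup = false (the "cgroupfs" driver chosen above), migrate the runtime shims to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-add enable_unprivileged_ports = true. A hedged sketch of one such edit done from Go instead of sed (same path and pattern as the log; error handling trimmed):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Mirror of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }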
	I0904 13:12:06.527206    4490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 13:12:06.530079    4490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 13:12:06.532840    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:12:06.615668    4490 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 13:12:06.622438    4490 start.go:495] detecting cgroup driver to use...
	I0904 13:12:06.622511    4490 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 13:12:06.631079    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 13:12:06.635757    4490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 13:12:06.641738    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 13:12:06.646631    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 13:12:06.650896    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 13:12:06.656167    4490 ssh_runner.go:195] Run: which cri-dockerd
	I0904 13:12:06.657582    4490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 13:12:06.661772    4490 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0904 13:12:06.666721    4490 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 13:12:06.758350    4490 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 13:12:06.861622    4490 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 13:12:06.861674    4490 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 13:12:06.867966    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:12:06.960068    4490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 13:12:08.537221    4490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.577158083s)
	I0904 13:12:08.537278    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 13:12:08.541651    4490 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0904 13:12:08.547891    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 13:12:08.552956    4490 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 13:12:08.613930    4490 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 13:12:08.694189    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:12:08.761877    4490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 13:12:08.767835    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 13:12:08.772131    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:12:08.863034    4490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 13:12:08.901105    4490 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 13:12:08.901175    4490 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 13:12:08.903296    4490 start.go:563] Will wait 60s for crictl version
	I0904 13:12:08.903351    4490 ssh_runner.go:195] Run: which crictl
	I0904 13:12:08.904678    4490 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 13:12:08.916396    4490 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0904 13:12:08.916491    4490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 13:12:08.928584    4490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 13:12:08.949468    4490 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0904 13:12:08.949542    4490 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0904 13:12:08.950891    4490 kubeadm.go:883] updating cluster {Name:running-upgrade-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50315 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0904 13:12:08.950931    4490 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0904 13:12:08.950972    4490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 13:12:08.961386    4490 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 13:12:08.961399    4490 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0904 13:12:08.961450    4490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 13:12:08.964569    4490 ssh_runner.go:195] Run: which lz4
	I0904 13:12:08.965901    4490 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 13:12:08.967051    4490 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 13:12:08.967062    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0904 13:12:09.824019    4490 docker.go:649] duration metric: took 858.159833ms to copy over tarball
	I0904 13:12:09.824076    4490 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 13:12:11.026695    4490 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.2026265s)
	I0904 13:12:11.026709    4490 ssh_runner.go:146] rm: /preloaded.tar.lz4
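
Because the guest's image store still carries the old k8s.gcr.io names, minikube falls back to the preloaded tarball: it copies preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 (≈360 MB) over SSH and unpacks it directly into /var, replacing the docker image store wholesale. The extraction is the single tar command shown above; a sketch of driving the same invocation from Go:

    package main

    import "os/exec"

    func main() {
    	// Same flags the log records: stream-decompress with lz4 and
    	// preserve security xattrs so file capabilities survive extraction.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    }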
	I0904 13:12:11.042127    4490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 13:12:11.044979    4490 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0904 13:12:11.050090    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:12:11.127176    4490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 13:12:12.351288    4490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.224117208s)
	I0904 13:12:12.351403    4490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 13:12:12.370572    4490 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 13:12:12.370581    4490 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0904 13:12:12.370587    4490 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0904 13:12:12.375144    4490 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:12:12.376993    4490 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:12:12.379016    4490 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:12:12.379680    4490 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:12:12.381644    4490 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:12:12.381757    4490 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:12:12.383729    4490 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:12:12.383794    4490 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:12:12.385384    4490 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:12:12.385415    4490 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:12:12.386945    4490 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0904 13:12:12.387043    4490 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:12:12.387754    4490 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:12:12.387798    4490 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:12:12.388588    4490 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0904 13:12:12.389327    4490 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:12:12.762185    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:12:12.775833    4490 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0904 13:12:12.775870    4490 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:12:12.775927    4490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:12:12.786283    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0904 13:12:12.799890    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:12:12.810474    4490 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0904 13:12:12.810496    4490 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:12:12.810548    4490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:12:12.812373    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:12:12.827377    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0904 13:12:12.827940    4490 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0904 13:12:12.827957    4490 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:12:12.827998    4490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:12:12.830714    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:12:12.849919    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0904 13:12:12.850413    4490 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0904 13:12:12.850430    4490 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:12:12.850477    4490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:12:12.851168    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0904 13:12:12.863606    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0904 13:12:12.865947    4490 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0904 13:12:12.865969    4490 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0904 13:12:12.866020    4490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0904 13:12:12.868286    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0904 13:12:12.877862    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0904 13:12:12.877985    4490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0904 13:12:12.883875    4490 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0904 13:12:12.883901    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0904 13:12:12.883941    4490 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0904 13:12:12.883959    4490 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:12:12.883997    4490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0904 13:12:12.892136    4490 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0904 13:12:12.892149    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0904 13:12:12.896211    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0904 13:12:12.896320    4490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0904 13:12:12.920146    4490 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0904 13:12:12.920276    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:12:12.922149    4490 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0904 13:12:12.922177    4490 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0904 13:12:12.922193    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0904 13:12:12.955675    4490 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0904 13:12:12.955705    4490 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:12:12.955765    4490 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:12:12.985004    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0904 13:12:12.985124    4490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0904 13:12:12.999409    4490 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0904 13:12:12.999439    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0904 13:12:13.098682    4490 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0904 13:12:13.098697    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0904 13:12:13.151559    4490 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0904 13:12:13.151688    4490 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:12:13.182004    4490 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0904 13:12:13.206216    4490 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0904 13:12:13.206240    4490 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:12:13.206292    4490 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:12:13.243975    4490 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0904 13:12:13.243990    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0904 13:12:14.056620    4490 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0904 13:12:14.056645    4490 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0904 13:12:14.057127    4490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0904 13:12:14.062643    4490 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0904 13:12:14.062688    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0904 13:12:14.125568    4490 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0904 13:12:14.125590    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0904 13:12:14.364682    4490 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0904 13:12:14.364721    4490 cache_images.go:92] duration metric: took 1.9941615s to LoadCachedImages
	W0904 13:12:14.364758    4490 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
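
The preload still leaves images tagged k8s.gcr.io/*, so LoadCachedImages walks the expected registry.k8s.io manifest: for each image it runs docker image inspect, removes the stale tag with docker rmi, copies the cached tarball from the host, and pipes it into docker load. The pass ultimately fails because the kube-apiserver tarball is absent from the host cache (the stat error above). A sketch of the load step exactly as logged (cat piped into docker load):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Mirror of: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
    	f, err := os.Open("/var/lib/minikube/images/pause_3.7")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // stream the image tarball into the daemon
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    }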
	I0904 13:12:14.364763    4490 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0904 13:12:14.364820    4490 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-478000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
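
The kubelet drop-in above is rendered from the profile: ExecStart is cleared and re-set so the flags (bootstrap kubeconfig, CRI endpoint unix:///var/run/cri-dockerd.sock, hostname override, node IP 10.0.2.15) exactly match the cluster config. A minimal text/template sketch of producing such a drop-in; the template body is illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	// Values taken from the profile logged above.
    	t.Execute(os.Stdout, map[string]string{
    		"Version":   "v1.24.1",
    		"CRISocket": "unix:///var/run/cri-dockerd.sock",
    		"Node":      "running-upgrade-478000",
    		"IP":        "10.0.2.15",
    	})
    }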
	I0904 13:12:14.364885    4490 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0904 13:12:14.379333    4490 cni.go:84] Creating CNI manager for ""
	I0904 13:12:14.379343    4490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:12:14.379349    4490 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 13:12:14.379361    4490 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-478000 NodeName:running-upgrade-478000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 13:12:14.379423    4490 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-478000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
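The generated kubeadm.yaml above is four YAML documents in one file: InitConfiguration (node registration, advertise address), ClusterConfiguration (control-plane endpoint, cert dir, pod/service CIDRs), KubeletConfiguration (cgroupfs driver, disk eviction disabled for CI), and KubeProxyConfiguration (conntrack timeouts zeroed). A sketch of splitting such a multi-document file with gopkg.in/yaml.v3, assuming only the two header fields are of interest:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // all documents consumed
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Println(doc.Kind, doc.APIVersion) // e.g. InitConfiguration kubeadm.k8s.io/v1beta3
    	}
    }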
	I0904 13:12:14.379473    4490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0904 13:12:14.382442    4490 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 13:12:14.382463    4490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 13:12:14.385000    4490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0904 13:12:14.389904    4490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 13:12:14.394817    4490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0904 13:12:14.401351    4490 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0904 13:12:14.402919    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:12:14.473740    4490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 13:12:14.478370    4490 certs.go:68] Setting up /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000 for IP: 10.0.2.15
	I0904 13:12:14.478375    4490 certs.go:194] generating shared ca certs ...
	I0904 13:12:14.478384    4490 certs.go:226] acquiring lock for ca certs: {Name:mkd62cc1bdffb2500ac7e662aba46cadabbc6839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:12:14.478539    4490 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key
	I0904 13:12:14.478576    4490 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key
	I0904 13:12:14.478585    4490 certs.go:256] generating profile certs ...
	I0904 13:12:14.478651    4490 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.key
	I0904 13:12:14.478669    4490 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.key.d077780c
	I0904 13:12:14.478683    4490 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.crt.d077780c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0904 13:12:14.578926    4490 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.crt.d077780c ...
	I0904 13:12:14.578932    4490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.crt.d077780c: {Name:mkda008523ffc6246b6e3c6e4c8b501899b21853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:12:14.579212    4490 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.key.d077780c ...
	I0904 13:12:14.579217    4490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.key.d077780c: {Name:mkcb95412587fe3b7f9c55da0697c1d4a8c6abad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:12:14.579353    4490 certs.go:381] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.crt.d077780c -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.crt
	I0904 13:12:14.579484    4490 certs.go:385] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.key.d077780c -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.key
	I0904 13:12:14.579601    4490 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/proxy-client.key
	I0904 13:12:14.579718    4490 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661.pem (1338 bytes)
	W0904 13:12:14.579742    4490 certs.go:480] ignoring /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661_empty.pem, impossibly tiny 0 bytes
	I0904 13:12:14.579748    4490 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 13:12:14.579767    4490 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem (1078 bytes)
	I0904 13:12:14.579788    4490 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem (1123 bytes)
	I0904 13:12:14.579808    4490 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem (1675 bytes)
	I0904 13:12:14.579850    4490 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem (1708 bytes)
	I0904 13:12:14.580185    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 13:12:14.587613    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 13:12:14.594834    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 13:12:14.601569    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 13:12:14.608350    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0904 13:12:14.615494    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 13:12:14.623539    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 13:12:14.669631    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 13:12:14.694225    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661.pem --> /usr/share/ca-certificates/1661.pem (1338 bytes)
	I0904 13:12:14.704606    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem --> /usr/share/ca-certificates/16612.pem (1708 bytes)
	I0904 13:12:14.724079    4490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 13:12:14.732424    4490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 13:12:14.737972    4490 ssh_runner.go:195] Run: openssl version
	I0904 13:12:14.739959    4490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1661.pem && ln -fs /usr/share/ca-certificates/1661.pem /etc/ssl/certs/1661.pem"
	I0904 13:12:14.746062    4490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1661.pem
	I0904 13:12:14.748137    4490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 19:41 /usr/share/ca-certificates/1661.pem
	I0904 13:12:14.748185    4490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1661.pem
	I0904 13:12:14.755504    4490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1661.pem /etc/ssl/certs/51391683.0"
	I0904 13:12:14.768230    4490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16612.pem && ln -fs /usr/share/ca-certificates/16612.pem /etc/ssl/certs/16612.pem"
	I0904 13:12:14.771362    4490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16612.pem
	I0904 13:12:14.773477    4490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 19:41 /usr/share/ca-certificates/16612.pem
	I0904 13:12:14.773510    4490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16612.pem
	I0904 13:12:14.789856    4490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16612.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 13:12:14.798625    4490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 13:12:14.815082    4490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:12:14.818216    4490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:12:14.818252    4490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:12:14.824796    4490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
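
The ln -fs targets such as /etc/ssl/certs/b5213941.0 follow OpenSSL's hashed-directory convention: openssl x509 -hash -noout prints the subject-name hash, and a symlink named <hash>.0 lets TLS stacks locate the CA by hash lookup. A sketch of the same two steps from Go, shelling out to openssl as the log does:

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // ln -fs semantics: replace an existing link
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    }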
	I0904 13:12:14.837168    4490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 13:12:14.841678    4490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 13:12:14.843843    4490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 13:12:14.846867    4490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 13:12:14.848844    4490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 13:12:14.851799    4490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 13:12:14.855869    4490 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
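
Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The equivalent check with Go's standard crypto/x509:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same test as `openssl x509 -checkend 86400`.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h; regenerate")
    	}
    }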
	I0904 13:12:14.862307    4490 kubeadm.go:392] StartCluster: {Name:running-upgrade-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50315 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:12:14.862379    4490 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 13:12:14.925000    4490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 13:12:14.942079    4490 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 13:12:14.942088    4490 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0904 13:12:14.942142    4490 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 13:12:14.948628    4490 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 13:12:14.948862    4490 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-478000" does not appear in /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:12:14.948912    4490 kubeconfig.go:62] /Users/jenkins/minikube-integration/19575-1140/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-478000" cluster setting kubeconfig missing "running-upgrade-478000" context setting]
	I0904 13:12:14.949055    4490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:12:14.949483    4490 kapi.go:59] client config for running-upgrade-478000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.key", CAFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104957f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 13:12:14.949823    4490 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 13:12:14.954958    4490 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-478000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
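
Drift detection is simply diff -u between the kubeadm.yaml already on the node and the freshly rendered .new file; diff exits 1 when the files differ, which minikube treats as "reconfigure from the new file". Here the CRI socket gained its unix:// scheme and the kubelet cgroup driver flipped from systemd to cgroupfs. A sketch of reading that exit code from Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.Output()
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
    		// Exit status 1 == files differ: reconfigure from the .new file.
    		fmt.Printf("config drift detected:\n%s", out)
    	} else if err != nil {
    		panic(err)
    	}
    }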
	I0904 13:12:14.954965    4490 kubeadm.go:1160] stopping kube-system containers ...
	I0904 13:12:14.955022    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 13:12:14.980053    4490 docker.go:483] Stopping containers: [9446d7ab7b80 d9db85719842 7b4624ed8253 f43574b020a0 0b66c95e4c12 41ae0abe438b 5f0e353f1b8b f4ce1a153e52 9e68fa5bdb53 52422a30d604 5dc6389287d2 c9f714723bf8 93ef5d800cdd 57c82cc4f6c9 f02908764441 77107c8458dd 3fe30c321c0e 83e7edcf5230 c2df6686f06e]
	I0904 13:12:14.980128    4490 ssh_runner.go:195] Run: docker stop 9446d7ab7b80 d9db85719842 7b4624ed8253 f43574b020a0 0b66c95e4c12 41ae0abe438b 5f0e353f1b8b f4ce1a153e52 9e68fa5bdb53 52422a30d604 5dc6389287d2 c9f714723bf8 93ef5d800cdd 57c82cc4f6c9 f02908764441 77107c8458dd 3fe30c321c0e 83e7edcf5230 c2df6686f06e
	I0904 13:12:15.234468    4490 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0904 13:12:15.285861    4490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 13:12:15.289487    4490 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep  4 20:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep  4 20:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep  4 20:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep  4 20:11 /etc/kubernetes/scheduler.conf
	
	I0904 13:12:15.289519    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/admin.conf
	I0904 13:12:15.292357    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0904 13:12:15.292388    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 13:12:15.295435    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/kubelet.conf
	I0904 13:12:15.298180    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0904 13:12:15.298200    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 13:12:15.300829    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/controller-manager.conf
	I0904 13:12:15.304049    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0904 13:12:15.304075    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 13:12:15.307289    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/scheduler.conf
	I0904 13:12:15.310230    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0904 13:12:15.310254    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 13:12:15.312755    4490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 13:12:15.315916    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:12:15.343450    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:12:15.704776    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:12:15.903674    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:12:15.923657    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
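
Rather than a full kubeadm init, the restart path replays individual phases against the existing cluster data: certs all, kubeconfig all, kubelet-start, control-plane all, and etcd local, each driven by the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of the same loop; the trailing /usr/bin in PATH is an assumption standing in for the remote shell's inherited $PATH:

    package main

    import "os/exec"

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{"env", "PATH=/var/lib/minikube/binaries/v1.24.1:/usr/bin",
    			"kubeadm", "init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			panic(string(out))
    		}
    	}
    }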
	I0904 13:12:15.953113    4490 api_server.go:52] waiting for apiserver process to appear ...
	I0904 13:12:15.953189    4490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:12:16.455588    4490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:12:16.955516    4490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:12:17.453259    4490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:12:17.955284    4490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:12:17.959588    4490 api_server.go:72] duration metric: took 2.006511875s to wait for apiserver process to appear ...
	I0904 13:12:17.959596    4490 api_server.go:88] waiting for apiserver healthz status ...
	I0904 13:12:17.959605    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:22.960620    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:22.960643    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:27.961542    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:27.961580    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:32.962169    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:32.962237    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:37.963191    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:37.963274    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:42.964794    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:42.964870    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:47.966653    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:47.966731    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:52.968733    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:52.968808    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:12:57.971515    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:12:57.971596    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:02.974106    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:02.974169    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:07.976497    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:07.976576    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:12.978497    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:12.978578    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:17.979600    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
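
The healthz probe is an HTTPS GET against https://10.0.2.15:8443/healthz with a short client timeout (apparently about 5s per attempt, judging by the spacing above); since every attempt times out, minikube switches to collecting diagnostics, the container-log gathering that follows. A sketch of such a probe, with certificate verification skipped purely for brevity; the real client verifies against minikube's CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed from the ~5s gaps between attempts
    		Transport: &http.Transport{
    			// Sketch only: real code should pin the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver not healthy yet:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }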
	I0904 13:13:17.980127    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:13:18.018831    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:13:18.018977    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:13:18.041821    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:13:18.041925    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:13:18.056702    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:13:18.056779    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:13:18.069022    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:13:18.069100    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:13:18.080617    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:13:18.080688    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:13:18.090877    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:13:18.090951    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:13:18.101564    4490 logs.go:276] 0 containers: []
	W0904 13:13:18.101577    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:13:18.101640    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:13:18.112211    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:13:18.112226    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:13:18.112231    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:13:18.127915    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:13:18.127928    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:13:18.143099    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:13:18.143111    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:13:18.154538    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:13:18.154547    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:13:18.168148    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:13:18.168161    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:13:18.179454    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:13:18.179467    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:13:18.190529    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:13:18.190541    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:13:18.194894    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:13:18.194901    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:13:18.211884    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:13:18.211895    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:13:18.222872    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:13:18.222885    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:13:18.248259    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:13:18.248266    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:13:18.260598    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:13:18.260612    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:13:18.301251    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:13:18.301262    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:13:18.314768    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:13:18.314781    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:13:18.325728    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:13:18.325739    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:13:18.337994    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:13:18.338007    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:13:18.409563    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:13:18.409578    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
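Each "Gathering logs" step shells out to docker logs with --tail 400, so the report stays bounded no matter how chatty a component is; the same 400-line cap is applied to journalctl and dmesg. A sketch of that call (hypothetical helper; container ID copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailLogs mirrors the log's `docker logs --tail 400 <id>` calls.
    // CombinedOutput captures stdout and stderr together, much as the
    // `/bin/bash -c` wrapper in the report does.
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailLogs("697e7d2f0666") // kube-apiserver container above
        if err != nil {
            fmt.Println("docker logs failed:", err)
        }
        fmt.Print(out)
    }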
	I0904 13:13:20.923006    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:25.925702    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:25.926150    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:13:25.965964    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:13:25.966105    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:13:25.987697    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:13:25.987815    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:13:26.004294    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:13:26.004366    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:13:26.016555    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:13:26.016624    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:13:26.027898    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:13:26.027966    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:13:26.038549    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:13:26.038619    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:13:26.048669    4490 logs.go:276] 0 containers: []
	W0904 13:13:26.048679    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:13:26.048728    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:13:26.059348    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:13:26.059365    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:13:26.059385    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:13:26.070498    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:13:26.070508    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:13:26.082616    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:13:26.082628    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:13:26.093796    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:13:26.093809    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:13:26.111620    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:13:26.111635    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:13:26.124113    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:13:26.124126    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:13:26.135392    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:13:26.135405    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:13:26.149053    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:13:26.149063    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:13:26.185602    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:13:26.185615    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:13:26.197631    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:13:26.197645    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:13:26.209179    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:13:26.209190    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:13:26.220647    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:13:26.220658    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:13:26.232117    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:13:26.232128    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:13:26.243966    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:13:26.243980    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:13:26.286174    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:13:26.286185    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:13:26.304041    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:13:26.304054    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:13:26.331418    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:13:26.331430    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
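Host-side logs come from the systemd journal rather than Docker: the kubelet unit on its own, and the docker and cri-docker units together, each capped at the last 400 entries; dmesg is filtered to warning severity and above (util-linux flags: -P no pager, -H human-readable, -L=never to disable color). A Go sketch of the journalctl side (hypothetical helper; assumes a systemd host and sudo access):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostLogs fetches the last 400 journal entries for the given systemd
    // units, as in `journalctl -u docker -u cri-docker -n 400`.
    func hostLogs(units ...string) (string, error) {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", "400")
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := hostLogs("docker", "cri-docker")
        if err != nil {
            fmt.Println("journalctl failed:", err)
        }
        fmt.Print(out)
    }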
	I0904 13:13:28.837921    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:33.840271    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:33.840661    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:13:33.880183    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:13:33.880313    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:13:33.902928    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:13:33.903046    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:13:33.920310    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:13:33.920373    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:13:33.932993    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:13:33.933079    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:13:33.943948    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:13:33.944008    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:13:33.954357    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:13:33.954430    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:13:33.964871    4490 logs.go:276] 0 containers: []
	W0904 13:13:33.964885    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:13:33.964946    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:13:33.983455    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:13:33.983472    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:13:33.983477    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:13:34.027410    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:13:34.027421    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:13:34.039054    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:13:34.039068    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:13:34.050559    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:13:34.050572    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:13:34.062596    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:13:34.062606    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:13:34.066936    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:13:34.066943    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:13:34.078114    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:13:34.078124    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:13:34.097177    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:13:34.097193    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:13:34.110264    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:13:34.110275    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:13:34.121777    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:13:34.121787    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:13:34.156058    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:13:34.156070    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:13:34.169855    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:13:34.169869    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:13:34.182137    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:13:34.182147    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:13:34.199107    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:13:34.199119    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:13:34.212029    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:13:34.212040    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:13:34.239413    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:13:34.239420    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:13:34.257303    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:13:34.257314    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
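The "container status" step is runtime-agnostic by way of a small shell fallback: the backtick substitution resolves crictl to its full path when `which` finds it (or leaves the bare name, which then fails cleanly), and the outer || drops back to plain docker ps -a if the crictl invocation fails for any reason. The same one-liner, driven from Go via /bin/bash -c as the report does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Verbatim from the report: prefer crictl when present, otherwise
        // fall back to docker for the container inventory.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }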
	I0904 13:13:36.770651    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:41.773045    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:41.773270    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:13:41.803735    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:13:41.803842    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:13:41.818349    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:13:41.818431    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:13:41.830105    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:13:41.830172    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:13:41.841192    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:13:41.841272    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:13:41.851444    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:13:41.851503    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:13:41.861348    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:13:41.861421    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:13:41.871533    4490 logs.go:276] 0 containers: []
	W0904 13:13:41.871546    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:13:41.871604    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:13:41.886090    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:13:41.886109    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:13:41.886114    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:13:41.890294    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:13:41.890302    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:13:41.924362    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:13:41.924374    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:13:41.935801    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:13:41.935810    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:13:41.947382    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:13:41.947394    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:13:41.958481    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:13:41.958489    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:13:41.975815    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:13:41.975827    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:13:41.987222    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:13:41.987232    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:13:41.999849    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:13:41.999860    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:13:42.040882    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:13:42.040892    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:13:42.052321    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:13:42.052333    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:13:42.063471    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:13:42.063483    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:13:42.074662    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:13:42.074673    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:13:42.101446    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:13:42.101457    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:13:42.113058    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:13:42.113071    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:13:42.127343    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:13:42.127352    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:13:42.138659    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:13:42.138670    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
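The "describe nodes" step does not rely on any kubectl on the tester's PATH: it runs the binary minikube provisioned inside the guest, pinned to the cluster's Kubernetes version (v1.24.1 here), against the in-guest kubeconfig. A sketch of the same invocation (paths copied from the log; assumes it runs inside the guest with sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Version-pinned kubectl plus the guest-local kubeconfig, so the
        // describe works even if the host kubeconfig is stale or absent.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }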
	I0904 13:13:44.653950    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:49.656680    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:49.657140    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:13:49.698612    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:13:49.698735    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:13:49.721334    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:13:49.721437    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:13:49.737526    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:13:49.737602    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:13:49.752386    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:13:49.752458    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:13:49.763586    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:13:49.763649    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:13:49.778446    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:13:49.778512    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:13:49.789849    4490 logs.go:276] 0 containers: []
	W0904 13:13:49.789862    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:13:49.789915    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:13:49.800644    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:13:49.800665    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:13:49.800670    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:13:49.812570    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:13:49.812581    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:13:49.824482    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:13:49.824493    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:13:49.842466    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:13:49.842475    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:13:49.858223    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:13:49.858234    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:13:49.870051    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:13:49.870061    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:13:49.884915    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:13:49.884927    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:13:49.911671    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:13:49.911678    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:13:49.952683    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:13:49.952694    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:13:49.957003    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:13:49.957012    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:13:49.968577    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:13:49.968594    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:13:49.984345    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:13:49.984355    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:13:49.996290    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:13:49.996302    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:13:50.032416    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:13:50.032429    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:13:50.046430    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:13:50.046443    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:13:50.057379    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:13:50.057393    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:13:50.068924    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:13:50.068936    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:13:52.582899    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:13:57.585528    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:13:57.585709    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:13:57.597130    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:13:57.597211    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:13:57.612032    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:13:57.612097    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:13:57.626366    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:13:57.626430    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:13:57.637768    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:13:57.637836    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:13:57.648101    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:13:57.648170    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:13:57.658859    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:13:57.658924    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:13:57.669173    4490 logs.go:276] 0 containers: []
	W0904 13:13:57.669185    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:13:57.669242    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:13:57.680062    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:13:57.680081    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:13:57.680088    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:13:57.715827    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:13:57.715838    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:13:57.727227    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:13:57.727247    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:13:57.738972    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:13:57.738984    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:13:57.756914    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:13:57.756925    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:13:57.769028    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:13:57.769041    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:13:57.783178    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:13:57.783187    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:13:57.793896    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:13:57.793909    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:13:57.807507    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:13:57.807520    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:13:57.818933    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:13:57.818941    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:13:57.823226    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:13:57.823234    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:13:57.834061    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:13:57.834073    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:13:57.844874    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:13:57.844887    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:13:57.856235    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:13:57.856245    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:13:57.883255    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:13:57.883263    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:13:57.924310    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:13:57.924317    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:13:57.935596    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:13:57.935606    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:00.448907    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:14:05.451544    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:14:05.451955    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:14:05.487287    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:14:05.487424    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:14:05.509497    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:14:05.509592    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:14:05.524221    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:14:05.524302    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:14:05.536079    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:14:05.536156    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:14:05.558518    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:14:05.558579    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:14:05.570693    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:14:05.570753    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:14:05.581073    4490 logs.go:276] 0 containers: []
	W0904 13:14:05.581085    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:14:05.581141    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:14:05.592047    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:14:05.592065    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:14:05.592071    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:14:05.603840    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:14:05.603853    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:14:05.615700    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:14:05.615710    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:14:05.651351    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:14:05.651365    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:14:05.666089    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:14:05.666103    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:14:05.679790    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:14:05.679803    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:14:05.696917    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:14:05.696927    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:05.708769    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:14:05.708782    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:14:05.719738    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:14:05.719759    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:14:05.724598    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:14:05.724604    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:14:05.736098    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:14:05.736111    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:14:05.747045    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:14:05.747056    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:14:05.772001    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:14:05.772012    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:14:05.811609    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:14:05.811615    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:14:05.823462    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:14:05.823473    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:14:05.835359    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:14:05.835373    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:14:05.847363    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:14:05.847377    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:14:08.359124    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:14:13.360180    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:14:13.360552    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:14:13.393589    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:14:13.393707    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:14:13.413577    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:14:13.413669    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:14:13.428139    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:14:13.428204    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:14:13.440203    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:14:13.440276    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:14:13.453514    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:14:13.453575    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:14:13.464279    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:14:13.464346    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:14:13.474716    4490 logs.go:276] 0 containers: []
	W0904 13:14:13.474727    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:14:13.474777    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:14:13.486233    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:14:13.486252    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:14:13.486259    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:14:13.490790    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:14:13.490798    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:14:13.525031    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:14:13.525048    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:14:13.536830    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:14:13.536844    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:14:13.556449    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:14:13.556459    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:14:13.567831    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:14:13.567845    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:14:13.608204    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:14:13.608213    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:14:13.621243    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:14:13.621258    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:14:13.632448    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:14:13.632464    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:14:13.657077    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:14:13.657092    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:14:13.668821    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:14:13.668834    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:14:13.685543    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:14:13.685557    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:13.698525    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:14:13.698536    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:14:13.725559    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:14:13.725570    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:14:13.739344    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:14:13.739354    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:14:13.752875    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:14:13.752885    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:14:13.764810    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:14:13.764823    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:14:16.279243    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:14:21.281915    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:14:21.282335    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:14:21.318352    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:14:21.318477    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:14:21.343296    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:14:21.343386    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:14:21.357893    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:14:21.357970    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:14:21.369947    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:14:21.370020    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:14:21.381142    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:14:21.381208    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:14:21.391606    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:14:21.391667    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:14:21.401552    4490 logs.go:276] 0 containers: []
	W0904 13:14:21.401569    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:14:21.401625    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:14:21.412616    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:14:21.412635    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:14:21.412640    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:14:21.424030    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:14:21.424042    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:21.435629    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:14:21.435641    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:14:21.446814    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:14:21.446824    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:14:21.450932    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:14:21.450939    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:14:21.485381    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:14:21.485392    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:14:21.500238    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:14:21.500250    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:14:21.517454    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:14:21.517463    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:14:21.541895    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:14:21.541905    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:14:21.584281    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:14:21.584296    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:14:21.595524    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:14:21.595536    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:14:21.609011    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:14:21.609023    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:14:21.620219    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:14:21.620229    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:14:21.631570    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:14:21.631579    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:14:21.643591    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:14:21.643602    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:14:21.654497    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:14:21.654512    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:14:21.669639    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:14:21.669653    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:14:24.184066    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:14:29.185157    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:14:29.185392    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:14:29.207502    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:14:29.207598    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:14:29.224079    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:14:29.224154    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:14:29.237062    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:14:29.237128    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:14:29.247990    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:14:29.248058    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:14:29.258604    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:14:29.258668    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:14:29.269631    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:14:29.269703    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:14:29.279735    4490 logs.go:276] 0 containers: []
	W0904 13:14:29.279745    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:14:29.279800    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:14:29.290760    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:14:29.290785    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:14:29.290790    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:14:29.333062    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:14:29.333074    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:14:29.345193    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:14:29.345206    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:14:29.362417    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:14:29.362427    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:14:29.374080    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:14:29.374092    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:14:29.386203    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:14:29.386215    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:14:29.421634    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:14:29.421647    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:14:29.436036    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:14:29.436048    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:14:29.447083    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:14:29.447094    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:14:29.458685    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:14:29.458697    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:29.470058    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:14:29.470067    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:14:29.496181    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:14:29.496194    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:14:29.501117    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:14:29.501125    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:14:29.514769    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:14:29.514778    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:14:29.526524    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:14:29.526535    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:14:29.537961    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:14:29.537972    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:14:29.549597    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:14:29.549607    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
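Taken together, the cycles form a retry loop: probe /healthz, gather the full diagnostic set on failure, pause roughly 2.5 seconds (compare the end of one gather with the next probe's timestamp), and try again until the overall deadline runs out. Note that the next attempt below fails with "dial tcp 10.0.2.15:8443: i/o timeout" rather than the usual context deadline, suggesting the connection itself is no longer being accepted rather than merely answered slowly. A schematic of the outer loop (hypothetical names, not minikube's source):

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServer retries probe until it succeeds or the deadline
    // passes, running collect after every failed attempt, with the ~2.5s
    // inter-attempt pause visible in the timestamps above.
    func waitForAPIServer(deadline time.Time, probe func() error, collect func()) error {
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                return nil
            }
            collect()
            time.Sleep(2500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never reported healthy")
    }

    func main() {
        attempt := 0
        err := waitForAPIServer(
            time.Now().Add(15*time.Second),
            func() error { attempt++; return fmt.Errorf("attempt %d: healthz unreachable", attempt) },
            func() { fmt.Println("gathering diagnostics ...") },
        )
        fmt.Println(err)
    }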
	I0904 13:14:32.063402    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:14:37.065887    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0904 13:14:37.066049    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:14:37.077659    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:14:37.077739    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:14:37.088875    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:14:37.088948    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:14:37.100089    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:14:37.100153    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:14:37.110908    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:14:37.110996    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:14:37.121366    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:14:37.121431    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:14:37.131820    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:14:37.131881    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:14:37.141941    4490 logs.go:276] 0 containers: []
	W0904 13:14:37.141960    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:14:37.142018    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:14:37.152806    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:14:37.152822    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:14:37.152827    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:14:37.164783    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:14:37.164793    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:14:37.176542    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:14:37.176554    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:37.187718    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:14:37.187728    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:14:37.214034    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:14:37.214044    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:14:37.249068    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:14:37.249079    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:14:37.261736    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:14:37.261752    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:14:37.273838    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:14:37.273851    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:14:37.315519    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:14:37.315527    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:14:37.330027    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:14:37.330040    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:14:37.342264    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:14:37.342277    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:14:37.356726    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:14:37.356737    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:14:37.361712    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:14:37.361719    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:14:37.377264    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:14:37.377277    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:14:37.395640    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:14:37.395651    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:14:37.407613    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:14:37.407625    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:14:37.419177    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:14:37.419188    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
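
The cycle that repeats throughout this log is a health probe followed by a diagnostic pass: minikube GETs https://10.0.2.15:8443/healthz and, when the request times out after roughly five seconds (the "Client.Timeout exceeded while awaiting headers" lines below), it gathers container and host logs before trying again. The following is a minimal Go sketch of the probe, assuming a plain net/http client with a 5-second timeout and skipped certificate verification; it illustrates the observed behavior and is not minikube's actual api_server.go implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz probes the apiserver health endpoint once.
    func checkHealthz(url string) error {
        client := &http.Client{
            // ~5s matches the gap between each "Checking apiserver healthz"
            // line and its "stopped:" line (an assumption read off the log).
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver's cert is not trusted by the probing host.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // Surfaces as "context deadline exceeded (Client.Timeout
            // exceeded while awaiting headers)" when the server is down.
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }
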
	I0904 13:14:39.933746    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:14:44.936212    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:14:44.936377    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:14:44.948212    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:14:44.948290    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:14:44.959025    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:14:44.959103    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:14:44.969747    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:14:44.969817    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:14:44.980578    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:14:44.980641    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:14:44.991926    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:14:44.991991    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:14:45.011293    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:14:45.011362    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:14:45.021950    4490 logs.go:276] 0 containers: []
	W0904 13:14:45.021961    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:14:45.022014    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:14:45.032980    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:14:45.032998    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:14:45.033003    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:14:45.038006    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:14:45.038013    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:45.049797    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:14:45.049808    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:14:45.075101    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:14:45.075109    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:14:45.086890    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:14:45.086900    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:14:45.108908    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:14:45.108920    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:14:45.145646    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:14:45.145661    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:14:45.159960    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:14:45.159971    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:14:45.174734    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:14:45.174746    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:14:45.193370    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:14:45.193381    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:14:45.204242    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:14:45.204255    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:14:45.245219    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:14:45.245228    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:14:45.256534    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:14:45.256546    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:14:45.267979    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:14:45.267990    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:14:45.282732    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:14:45.282745    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:14:45.294136    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:14:45.294150    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:14:45.305397    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:14:45.305409    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
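
Before each gathering pass, the runner enumerates candidate containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; the "N containers: [...]" lines are the parsed output of that command. Here is a sketch of that step in Go, assuming local docker access (in this report the same command runs inside the VM via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or not) whose name carries
    // the k8s_<component> prefix, returning their short IDs.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one short ID per line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
        }
    }

The empty kindnet result, and the matching "No container was found matching \"kindnet\"" warning, recurs every cycle, presumably because no kindnet CNI pod was ever scheduled on this cluster.
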
	I0904 13:14:47.817889    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:14:52.819305    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:14:52.819430    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:14:52.831305    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:14:52.831381    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:14:52.843011    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:14:52.843079    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:14:52.854246    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:14:52.854322    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:14:52.866372    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:14:52.866451    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:14:52.888410    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:14:52.888483    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:14:52.900384    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:14:52.900465    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:14:52.912391    4490 logs.go:276] 0 containers: []
	W0904 13:14:52.912404    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:14:52.912464    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:14:52.924064    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:14:52.924083    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:14:52.924089    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:14:52.943816    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:14:52.943828    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:14:52.962796    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:14:52.962812    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:14:53.008830    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:14:53.008847    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:14:53.021940    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:14:53.021955    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:14:53.040877    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:14:53.040891    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:14:53.053546    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:14:53.053559    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:14:53.081384    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:14:53.081399    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:14:53.120709    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:14:53.120722    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:14:53.136724    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:14:53.136737    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:14:53.149327    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:14:53.149340    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:14:53.167318    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:14:53.167334    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:14:53.183409    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:14:53.183422    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:14:53.188252    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:14:53.188264    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:14:53.201756    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:14:53.201768    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:14:53.214453    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:14:53.214467    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:14:53.227254    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:14:53.227267    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
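
Each "Gathering logs for X ..." line is paired with the exact command used: docker logs --tail 400 for containers, journalctl for the kubelet and Docker/cri-docker units, a severity-filtered dmesg, kubectl describe nodes, and a crictl-with-docker-fallback for container status. Below is a compact Go sketch that replays those commands locally, assuming a shell and docker on the host; the container IDs are examples copied from this report, and the real runner executes everything over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one command through bash, as the log's ssh_runner does,
    // and prints whatever it produced.
    func run(cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("%q failed: %v\n", cmd, err)
        }
        fmt.Print(string(out))
    }

    func main() {
        // Per-container logs, capped at the last 400 lines as in the report.
        for _, id := range []string{"697e7d2f0666", "9446d7ab7b80"} {
            run("docker logs --tail 400 " + id)
        }
        // Host-level sources gathered in the same pass.
        run("sudo journalctl -u kubelet -n 400")
        run("sudo journalctl -u docker -u cri-docker -n 400")
        run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        // Container status, preferring crictl and falling back to docker.
        run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
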
	I0904 13:14:55.741020    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:00.743784    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:00.743967    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:00.761343    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:00.761426    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:00.780695    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:00.780772    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:00.791762    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:00.791833    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:00.802721    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:00.802804    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:00.817734    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:00.817804    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:00.828547    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:00.828616    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:00.838900    4490 logs.go:276] 0 containers: []
	W0904 13:15:00.838911    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:00.838968    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:00.849605    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:00.849626    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:00.849631    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:00.866979    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:00.866989    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:00.878599    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:00.878610    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:00.892904    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:00.892917    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:00.904819    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:00.904830    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:00.915925    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:00.915937    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:00.927357    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:00.927366    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:00.939569    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:00.939583    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:00.950859    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:00.950871    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:00.962703    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:00.962718    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:00.978716    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:00.978728    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:00.989720    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:00.989732    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:01.014372    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:01.014382    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:01.054874    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:01.054882    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:01.066971    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:01.066985    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:01.080776    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:01.080787    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:01.085257    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:01.085267    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:03.625283    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:08.627502    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:08.627613    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:08.638745    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:08.638823    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:08.650363    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:08.650438    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:08.661556    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:08.661624    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:08.673620    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:08.673695    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:08.685511    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:08.685585    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:08.697109    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:08.697182    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:08.708162    4490 logs.go:276] 0 containers: []
	W0904 13:15:08.708174    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:08.708239    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:08.727809    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:08.727828    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:08.727835    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:08.740764    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:08.740776    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:08.759070    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:08.759085    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:08.771327    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:08.771344    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:08.798121    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:08.798134    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:08.840904    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:08.840921    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:08.878453    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:08.878465    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:08.891587    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:08.891601    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:08.908167    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:08.908180    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:08.921043    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:08.921055    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:08.936039    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:08.936055    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:08.948323    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:08.948336    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:08.961047    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:08.961058    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:08.966198    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:08.966211    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:08.981611    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:08.981626    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:08.994222    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:08.994234    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:09.006195    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:09.006210    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:11.521172    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:16.523631    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:16.523844    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:16.539550    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:16.539641    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:16.552029    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:16.552110    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:16.563598    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:16.563669    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:16.573877    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:16.573947    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:16.584923    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:16.584989    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:16.595291    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:16.595359    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:16.605709    4490 logs.go:276] 0 containers: []
	W0904 13:15:16.605722    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:16.605782    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:16.620272    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:16.620290    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:16.620295    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:16.662611    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:16.662633    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:16.674471    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:16.674482    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:16.685461    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:16.685474    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:16.697188    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:16.697199    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:16.733524    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:16.733535    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:16.744848    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:16.744862    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:16.755821    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:16.755836    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:16.767029    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:16.767040    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:16.778676    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:16.778689    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:16.795359    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:16.795368    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:16.820186    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:16.820194    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:16.834825    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:16.834854    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:16.846051    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:16.846066    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:16.861450    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:16.861461    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:16.866177    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:16.866184    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:16.880593    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:16.880604    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:19.394398    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:24.397114    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:24.397554    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:24.438680    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:24.438817    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:24.459297    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:24.459379    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:24.473991    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:24.474068    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:24.487022    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:24.487096    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:24.497935    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:24.498006    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:24.512120    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:24.512189    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:24.524818    4490 logs.go:276] 0 containers: []
	W0904 13:15:24.524829    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:24.524890    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:24.535437    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:24.535454    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:24.535459    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:24.546952    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:24.546963    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:24.559914    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:24.559928    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:24.572405    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:24.572420    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:24.583658    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:24.583670    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:24.595360    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:24.595371    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:24.630206    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:24.630217    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:24.645250    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:24.645262    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:24.662926    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:24.662938    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:24.703159    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:24.703167    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:24.717391    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:24.717401    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:24.728934    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:24.728945    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:24.739931    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:24.739945    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:24.751239    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:24.751251    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:24.762511    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:24.762523    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:24.786223    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:24.786238    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:24.790910    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:24.790917    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:27.304336    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:32.306469    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:32.306582    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:32.317867    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:32.317935    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:32.328891    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:32.328963    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:32.339972    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:32.340047    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:32.350391    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:32.350459    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:32.361219    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:32.361280    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:32.371820    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:32.371893    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:32.383302    4490 logs.go:276] 0 containers: []
	W0904 13:15:32.383315    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:32.383383    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:32.396173    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:32.396194    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:32.396200    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:32.441273    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:32.441299    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:32.455468    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:32.455486    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:32.471144    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:32.471160    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:32.484883    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:32.484896    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:32.509788    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:32.509807    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:32.554524    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:32.554538    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:32.573009    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:32.573024    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:32.588912    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:32.588930    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:32.595458    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:32.595480    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:32.621138    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:32.621164    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:32.634110    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:32.634126    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:32.651686    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:32.651703    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:32.667851    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:32.667870    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:32.681382    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:32.681394    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:32.700226    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:32.700249    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:32.713709    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:32.713724    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:35.229811    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:40.231993    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:40.232174    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:40.252021    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:40.252096    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:40.264658    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:40.264728    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:40.276187    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:40.276257    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:40.286930    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:40.286992    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:40.297220    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:40.297283    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:40.308757    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:40.308823    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:40.318552    4490 logs.go:276] 0 containers: []
	W0904 13:15:40.318564    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:40.318619    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:40.329484    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:40.329502    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:40.329507    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:40.334386    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:40.334397    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:40.347062    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:40.347082    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:40.358134    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:40.358147    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:40.370033    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:40.370043    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:40.384503    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:40.384514    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:40.407972    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:40.407980    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:40.442854    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:40.442867    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:40.454836    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:40.454847    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:40.467001    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:40.467014    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:40.486701    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:40.486712    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:40.501228    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:40.501238    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:40.512247    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:40.512259    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:40.525153    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:40.525164    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:40.565822    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:40.565833    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:40.577843    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:40.577853    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:40.588907    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:40.588918    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:43.107781    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:48.109904    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:48.110009    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:48.121741    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:48.121815    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:48.132993    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:48.133072    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:48.144119    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:48.144194    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:48.160902    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:48.160978    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:48.178882    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:48.178967    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:48.189636    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:48.189705    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:48.204437    4490 logs.go:276] 0 containers: []
	W0904 13:15:48.204450    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:48.204513    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:48.219778    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:48.219797    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:48.219803    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:48.231238    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:48.231253    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:48.243188    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:48.243202    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:48.255085    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:48.255100    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:48.302928    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:48.302942    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:48.314104    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:48.314115    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:48.328096    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:48.328108    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:48.339556    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:48.339569    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:48.351330    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:48.351342    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:48.364712    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:48.364725    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:48.382810    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:48.382824    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:48.407701    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:48.407712    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:48.412021    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:48.412033    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:48.450901    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:48.450915    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:48.468216    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:48.468231    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:48.480353    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:48.480365    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:48.493294    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:48.493306    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:51.007630    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:56.009878    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:56.010205    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:56.044789    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:56.044937    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:56.065592    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:56.065686    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:56.080110    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:56.080189    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:56.091876    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:56.091945    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:56.102564    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:56.102628    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:56.116209    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:56.116284    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:56.127783    4490 logs.go:276] 0 containers: []
	W0904 13:15:56.127796    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:56.127858    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:56.137927    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:56.137945    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:56.137950    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:56.149794    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:56.149804    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:56.162906    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:56.162920    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:56.176358    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:56.176371    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:56.189542    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:56.189554    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:56.202460    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:56.202475    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:56.214840    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:56.214852    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:56.232537    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:56.232549    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:56.247026    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:56.247036    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:56.260136    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:56.260149    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:56.264280    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:56.264290    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:56.304101    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:56.304114    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:56.318265    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:56.318277    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:56.329692    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:56.329707    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:56.344636    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:56.344648    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:56.356309    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:56.356322    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:56.379937    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:56.379966    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
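
Judging by the timestamps, each cycle costs roughly eight seconds: five for the timed-out probe, two to three for the gathering pass, then a short pause before the next attempt, with the loop continuing until some overall deadline expires. A sketch of that outer loop follows; the interval is read off the log and the deadline is purely illustrative, not a minikube constant:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServer probes until success or an overall deadline, running
    // a diagnostic pass between failed probes, as the log above shows.
    func waitForAPIServer(probe func() error, gather func(), interval, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for {
            if err := probe(); err == nil {
                return nil
            }
            gather() // the "Gathering logs for ..." pass between probes
            if time.Now().After(stop) {
                return fmt.Errorf("apiserver never became healthy within %s", deadline)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitForAPIServer(
            func() error { return fmt.Errorf("healthz timed out") }, // stand-in probe
            func() { fmt.Println("gathering diagnostics ...") },
            2500*time.Millisecond, // ~2.5s pause observed between cycles
            10*time.Second,        // illustrative overall budget (assumption)
        )
        fmt.Println(err)
    }
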
	I0904 13:15:58.921908    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:03.924124    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:03.924342    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:16:03.950487    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:16:03.950614    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:16:03.968167    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:16:03.968261    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:16:03.981063    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:16:03.981138    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:16:03.992839    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:16:03.992912    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:16:04.003728    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:16:04.003801    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:16:04.014347    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:16:04.014425    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:16:04.027313    4490 logs.go:276] 0 containers: []
	W0904 13:16:04.027324    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:16:04.027374    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:16:04.039704    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:16:04.039721    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:16:04.039726    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:16:04.053639    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:16:04.053648    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:16:04.074572    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:16:04.074585    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:16:04.085871    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:16:04.085881    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:16:04.097221    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:16:04.097237    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:16:04.137923    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:16:04.137932    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:16:04.142513    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:16:04.142522    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:16:04.154506    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:16:04.154519    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:16:04.166684    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:16:04.166697    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:16:04.189102    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:16:04.189112    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:16:04.199893    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:16:04.199904    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:16:04.211239    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:16:04.211250    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:16:04.222531    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:16:04.222541    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:16:04.257307    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:16:04.257319    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:16:04.269192    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:16:04.269207    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:16:04.284560    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:16:04.284572    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:16:04.307411    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:16:04.307420    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:16:06.825469    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:11.827662    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:11.827806    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:16:11.840682    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:16:11.840746    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:16:11.851647    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:16:11.851718    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:16:11.862537    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:16:11.862613    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:16:11.874006    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:16:11.874084    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:16:11.884616    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:16:11.884696    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:16:11.895033    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:16:11.895096    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:16:11.905209    4490 logs.go:276] 0 containers: []
	W0904 13:16:11.905226    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:16:11.905282    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:16:11.915628    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:16:11.915648    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:16:11.915653    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:16:11.957172    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:16:11.957181    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:16:11.977958    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:16:11.977968    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:16:11.982589    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:16:11.982595    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:16:12.021840    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:16:12.021853    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:16:12.033339    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:16:12.033349    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:16:12.050159    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:16:12.050170    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:16:12.062416    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:16:12.062427    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:16:12.080281    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:16:12.080290    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:16:12.102647    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:16:12.102657    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:16:12.113954    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:16:12.113965    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:16:12.125840    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:16:12.125851    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:16:12.137119    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:16:12.137134    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:16:12.151600    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:16:12.151613    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:16:12.164558    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:16:12.164569    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:16:12.176432    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:16:12.176447    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:16:12.187612    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:16:12.187626    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:16:14.699341    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:19.701583    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:19.701666    4490 kubeadm.go:597] duration metric: took 4m4.762362208s to restartPrimaryControlPlane
	W0904 13:16:19.701752    4490 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0904 13:16:19.701785    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0904 13:16:20.677189    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 13:16:20.682367    4490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 13:16:20.685268    4490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 13:16:20.688357    4490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 13:16:20.688363    4490 kubeadm.go:157] found existing configuration files:
	
	I0904 13:16:20.688386    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/admin.conf
	I0904 13:16:20.690919    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 13:16:20.690946    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 13:16:20.693674    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/kubelet.conf
	I0904 13:16:20.696953    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 13:16:20.696973    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 13:16:20.699915    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/controller-manager.conf
	I0904 13:16:20.702285    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 13:16:20.702307    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 13:16:20.705293    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/scheduler.conf
	I0904 13:16:20.708304    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 13:16:20.708328    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 13:16:20.711063    4490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 13:16:20.728571    4490 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0904 13:16:20.728771    4490 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 13:16:20.799659    4490 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 13:16:20.799725    4490 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 13:16:20.799791    4490 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0904 13:16:20.849622    4490 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 13:16:20.853632    4490 out.go:235]   - Generating certificates and keys ...
	I0904 13:16:20.853665    4490 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 13:16:20.853698    4490 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 13:16:20.853735    4490 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0904 13:16:20.853763    4490 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0904 13:16:20.853801    4490 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0904 13:16:20.853838    4490 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0904 13:16:20.853877    4490 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0904 13:16:20.853909    4490 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0904 13:16:20.853949    4490 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0904 13:16:20.853988    4490 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0904 13:16:20.854007    4490 kubeadm.go:310] [certs] Using the existing "sa" key
	I0904 13:16:20.854032    4490 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 13:16:20.918226    4490 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 13:16:20.975827    4490 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 13:16:21.106365    4490 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 13:16:21.162358    4490 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 13:16:21.198194    4490 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 13:16:21.198381    4490 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 13:16:21.198475    4490 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 13:16:21.289942    4490 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 13:16:21.291628    4490 out.go:235]   - Booting up control plane ...
	I0904 13:16:21.291672    4490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 13:16:21.291712    4490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 13:16:21.291750    4490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 13:16:21.291822    4490 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 13:16:21.291922    4490 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0904 13:16:25.794742    4490 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503557 seconds
	I0904 13:16:25.794834    4490 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 13:16:25.799555    4490 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 13:16:26.314859    4490 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 13:16:26.315260    4490 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-478000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 13:16:26.819887    4490 kubeadm.go:310] [bootstrap-token] Using token: 7qgyum.uup81ppvceosqebq
	I0904 13:16:26.826405    4490 out.go:235]   - Configuring RBAC rules ...
	I0904 13:16:26.826467    4490 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 13:16:26.826520    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 13:16:26.828551    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 13:16:26.830441    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 13:16:26.831451    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 13:16:26.832285    4490 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 13:16:26.835701    4490 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 13:16:27.011341    4490 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 13:16:27.224213    4490 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 13:16:27.224649    4490 kubeadm.go:310] 
	I0904 13:16:27.224685    4490 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 13:16:27.224692    4490 kubeadm.go:310] 
	I0904 13:16:27.224740    4490 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 13:16:27.224745    4490 kubeadm.go:310] 
	I0904 13:16:27.224757    4490 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 13:16:27.224793    4490 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 13:16:27.224822    4490 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 13:16:27.224827    4490 kubeadm.go:310] 
	I0904 13:16:27.224856    4490 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 13:16:27.224861    4490 kubeadm.go:310] 
	I0904 13:16:27.224885    4490 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 13:16:27.224888    4490 kubeadm.go:310] 
	I0904 13:16:27.224924    4490 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 13:16:27.224971    4490 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 13:16:27.225007    4490 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 13:16:27.225010    4490 kubeadm.go:310] 
	I0904 13:16:27.225054    4490 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 13:16:27.225098    4490 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 13:16:27.225103    4490 kubeadm.go:310] 
	I0904 13:16:27.225148    4490 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7qgyum.uup81ppvceosqebq \
	I0904 13:16:27.225205    4490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 \
	I0904 13:16:27.225217    4490 kubeadm.go:310] 	--control-plane 
	I0904 13:16:27.225221    4490 kubeadm.go:310] 
	I0904 13:16:27.225265    4490 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 13:16:27.225272    4490 kubeadm.go:310] 
	I0904 13:16:27.225310    4490 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7qgyum.uup81ppvceosqebq \
	I0904 13:16:27.225365    4490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 
	I0904 13:16:27.225427    4490 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 13:16:27.225433    4490 cni.go:84] Creating CNI manager for ""
	I0904 13:16:27.225440    4490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:16:27.229937    4490 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 13:16:27.236886    4490 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 13:16:27.240151    4490 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0904 13:16:27.245244    4490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 13:16:27.245295    4490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 13:16:27.245325    4490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-478000 minikube.k8s.io/updated_at=2024_09_04T13_16_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=running-upgrade-478000 minikube.k8s.io/primary=true
	I0904 13:16:27.248390    4490 ops.go:34] apiserver oom_adj: -16
	I0904 13:16:27.291760    4490 kubeadm.go:1113] duration metric: took 46.500625ms to wait for elevateKubeSystemPrivileges
	I0904 13:16:27.291778    4490 kubeadm.go:394] duration metric: took 4m12.432398417s to StartCluster
	I0904 13:16:27.291788    4490 settings.go:142] acquiring lock: {Name:mk9e5d70c30d2e6b96e7a9eeb7ab14f5f9a1127e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:16:27.291885    4490 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:16:27.292293    4490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:16:27.292490    4490 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:16:27.292502    4490 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 13:16:27.292576    4490 config.go:182] Loaded profile config "running-upgrade-478000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:16:27.292590    4490 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-478000"
	I0904 13:16:27.292590    4490 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-478000"
	I0904 13:16:27.292605    4490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-478000"
	I0904 13:16:27.292615    4490 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-478000"
	W0904 13:16:27.292619    4490 addons.go:243] addon storage-provisioner should already be in state true
	I0904 13:16:27.292630    4490 host.go:66] Checking if "running-upgrade-478000" exists ...
	I0904 13:16:27.300853    4490 out.go:177] * Verifying Kubernetes components...
	I0904 13:16:27.301055    4490 kapi.go:59] client config for running-upgrade-478000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.key", CAFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104957f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 13:16:27.301404    4490 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-478000"
	W0904 13:16:27.301411    4490 addons.go:243] addon default-storageclass should already be in state true
	I0904 13:16:27.301421    4490 host.go:66] Checking if "running-upgrade-478000" exists ...
	I0904 13:16:27.302230    4490 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 13:16:27.304743    4490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 13:16:27.304751    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	I0904 13:16:27.304931    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:16:27.310941    4490 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:16:27.312378    4490 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:16:27.312384    4490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 13:16:27.312394    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	I0904 13:16:27.398875    4490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 13:16:27.403833    4490 api_server.go:52] waiting for apiserver process to appear ...
	I0904 13:16:27.403873    4490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:16:27.407856    4490 api_server.go:72] duration metric: took 115.35675ms to wait for apiserver process to appear ...
	I0904 13:16:27.407864    4490 api_server.go:88] waiting for apiserver healthz status ...
	I0904 13:16:27.407871    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:27.436722    4490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 13:16:27.439583    4490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:16:27.764533    4490 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 13:16:27.764545    4490 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 13:16:32.410073    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:32.410168    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:37.411393    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:37.411436    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:42.412097    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:42.412141    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:47.413440    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:47.413493    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:52.414358    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:52.414455    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:57.415989    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:57.416032    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0904 13:16:57.766394    4490 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0904 13:16:57.770935    4490 out.go:177] * Enabled addons: storage-provisioner
	I0904 13:16:57.779916    4490 addons.go:510] duration metric: took 30.487932375s for enable addons: enabled=[storage-provisioner]
	I0904 13:17:02.418094    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:02.418166    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:07.420553    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:07.420598    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:12.422259    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:12.422337    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:17.424758    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:17.424804    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:22.427131    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:22.427209    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:27.429708    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:27.429870    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:27.453012    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:27.453114    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:27.469840    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:27.469919    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:27.483061    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:27.483131    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:27.494432    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:27.494507    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:27.513461    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:27.513539    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:27.530230    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:27.530304    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:27.542099    4490 logs.go:276] 0 containers: []
	W0904 13:17:27.542110    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:27.542172    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:27.553008    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:27.553021    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:27.553027    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:27.571412    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:27.571424    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:27.587670    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:27.587680    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:27.611200    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:27.611210    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:27.623082    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:27.623097    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:27.638399    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:27.638412    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:27.674224    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:27.674235    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:27.688976    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:27.688988    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:27.703176    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:27.703189    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:27.714946    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:27.714957    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:27.726843    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:27.726854    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:27.740042    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:27.740053    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:27.779856    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:27.779869    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:30.286781    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:35.289043    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:35.289202    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:35.307716    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:35.307803    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:35.327031    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:35.327095    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:35.343439    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:35.343502    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:35.355158    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:35.355232    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:35.366352    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:35.366420    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:35.376905    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:35.376968    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:35.391228    4490 logs.go:276] 0 containers: []
	W0904 13:17:35.391239    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:35.391298    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:35.401853    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:35.401870    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:35.401875    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:35.416680    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:35.416693    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:35.431080    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:35.431094    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:35.452250    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:35.452262    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:35.465057    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:35.465070    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:35.490987    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:35.490999    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:35.502425    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:35.502436    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:35.541625    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:35.541636    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:35.545890    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:35.545899    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:35.582281    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:35.582295    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:35.594081    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:35.594092    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:35.606476    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:35.606488    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:35.624703    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:35.624714    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:38.138254    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:43.140281    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:43.140547    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:43.161870    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:43.161985    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:43.177648    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:43.177720    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:43.190406    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:43.190473    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:43.201611    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:43.201677    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:43.212174    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:43.212247    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:43.222861    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:43.222919    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:43.233401    4490 logs.go:276] 0 containers: []
	W0904 13:17:43.233412    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:43.233468    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:43.243901    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:43.243917    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:43.243923    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:43.255736    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:43.255746    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:43.266808    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:43.266818    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:43.278236    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:43.278246    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:43.294081    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:43.294091    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:43.307819    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:43.307829    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:43.322522    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:43.322532    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:43.334175    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:43.334185    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:43.353670    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:43.353680    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:43.390847    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:43.390857    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:43.394927    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:43.394936    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:43.428291    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:43.428302    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:43.451695    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:43.451705    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:45.965703    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:50.968272    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:50.968617    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:51.007064    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:51.007201    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:51.033636    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:51.033738    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:51.048933    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:51.049007    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:51.060780    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:51.060854    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:51.072415    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:51.072478    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:51.083094    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:51.083166    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:51.093701    4490 logs.go:276] 0 containers: []
	W0904 13:17:51.093713    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:51.093776    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:51.104759    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:51.104779    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:51.104785    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:51.115875    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:51.115889    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:51.130579    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:51.130590    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:51.152295    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:51.152306    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:51.177744    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:51.177753    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:51.182519    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:51.182528    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:51.197117    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:51.197130    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:51.212214    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:51.212223    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:51.224221    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:51.224232    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:51.236246    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:51.236256    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:51.248292    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:51.248302    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:51.259886    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:51.259897    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:51.299524    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:51.299536    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:53.841808    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:58.844112    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:58.844490    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:58.883725    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:58.883854    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:58.902739    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:58.902824    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:58.918032    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:58.918112    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:58.929758    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:58.929829    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:58.940119    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:58.940185    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:58.950891    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:58.950958    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:58.960982    4490 logs.go:276] 0 containers: []
	W0904 13:17:58.960994    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:58.961057    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:58.974567    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:58.974586    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:58.974592    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:58.986291    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:58.986303    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:58.997820    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:58.997833    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:59.009289    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:59.009303    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:59.024042    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:59.024055    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:59.035622    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:59.035634    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:59.050451    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:59.050464    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:59.064517    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:59.064532    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:59.079110    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:59.079124    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:59.096329    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:59.096339    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:59.120510    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:59.120519    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:59.158219    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:59.158232    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:59.162532    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:59.162542    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
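	The block above is one complete diagnostic cycle, and it repeats for the rest of this section: the healthz probe against https://10.0.2.15:8443/healthz hits the 5 s client deadline, so minikube falls back to enumerating each k8s_* control-plane container and tailing its logs before probing again. A minimal bash sketch of that loop, reconstructed from the commands in the log itself (assumptions: curl stands in for minikube's internal Go HTTP client, and the 2 s pause approximates the gap the timestamps show between cycles):

	    # Sketch of the diagnostic cycle visible above; container names and
	    # the 5 s timeout are taken from the log, curl/sleep are assumptions.
	    APISERVER=https://10.0.2.15:8443/healthz
	    while ! curl -sfk --max-time 5 "$APISERVER" >/dev/null; do
	        for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                 kube-controller-manager kindnet storage-provisioner; do
	            # same filter minikube runs: docker ps -a --filter=name=k8s_<name>
	            for id in $(docker ps -a --filter=name=k8s_$c --format={{.ID}}); do
	                docker logs --tail 400 "$id"    # per-container logs
	            done
	        done
	        sudo journalctl -u kubelet -n 400       # host-level logs, same units as above
	        sudo journalctl -u docker -u cri-docker -n 400
	        sleep 2                                 # assumed; log shows ~2-3 s between cycles
	    done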
	I0904 13:18:01.700394    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:06.702594    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:06.702814    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:06.718738    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:06.718818    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:06.733122    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:06.733193    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:06.743754    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:06.743814    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:06.753950    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:06.754010    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:06.764900    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:06.764969    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:06.775793    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:06.775853    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:06.785852    4490 logs.go:276] 0 containers: []
	W0904 13:18:06.785861    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:06.785909    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:06.796474    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:06.796491    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:06.796498    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:06.810684    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:06.810698    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:06.821883    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:06.821893    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:06.836327    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:06.836338    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:06.854820    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:06.854835    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:06.869748    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:06.869762    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:06.881572    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:06.881585    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:06.918825    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:06.918835    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:06.954206    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:06.954218    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:06.969117    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:06.969132    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:06.981129    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:06.981139    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:06.993253    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:06.993264    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:07.016506    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:07.016514    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:09.521766    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:14.524129    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:14.524371    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:14.542848    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:14.542926    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:14.554950    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:14.555024    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:14.565519    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:14.565588    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:14.576268    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:14.576341    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:14.590458    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:14.590529    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:14.600789    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:14.600856    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:14.611682    4490 logs.go:276] 0 containers: []
	W0904 13:18:14.611692    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:14.611754    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:14.622988    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:14.623002    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:14.623007    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:14.660139    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:14.660153    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:14.675399    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:14.675410    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:14.687030    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:14.687040    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:14.703817    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:14.703828    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:14.743235    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:14.743249    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:14.747721    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:14.747730    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:14.761404    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:14.761417    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:14.773293    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:14.773308    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:14.784954    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:14.784965    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:14.799947    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:14.799957    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:14.811376    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:14.811390    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:14.836885    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:14.836895    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:17.350332    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:22.352674    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:22.352884    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:22.370232    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:22.370325    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:22.383890    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:22.383965    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:22.395400    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:22.395470    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:22.405536    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:22.405607    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:22.416641    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:22.416712    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:22.427206    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:22.427273    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:22.437204    4490 logs.go:276] 0 containers: []
	W0904 13:18:22.437214    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:22.437268    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:22.454467    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:22.454484    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:22.454489    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:22.492274    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:22.492285    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:22.496771    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:22.496779    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:22.508938    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:22.508954    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:22.526537    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:22.526551    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:22.538512    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:22.538527    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:22.562052    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:22.562063    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:22.597562    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:22.597574    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:22.612324    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:22.612337    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:22.626026    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:22.626040    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:22.637999    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:22.638011    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:22.650179    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:22.650192    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:22.668112    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:22.668123    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:25.181644    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:30.183816    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:30.183959    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:30.195123    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:30.195197    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:30.205785    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:30.205842    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:30.216252    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:30.216328    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:30.227060    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:30.227125    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:30.237192    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:30.237264    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:30.250355    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:30.250420    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:30.260768    4490 logs.go:276] 0 containers: []
	W0904 13:18:30.260779    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:30.260832    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:30.271238    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:30.271252    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:30.271257    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:30.289836    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:30.289847    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:30.302917    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:30.302932    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:30.317488    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:30.317501    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:30.329639    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:30.329651    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:30.341310    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:30.341325    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:30.364329    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:30.364337    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:30.401393    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:30.401402    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:30.405690    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:30.405699    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:30.441007    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:30.441018    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:30.454629    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:30.454642    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:30.466173    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:30.466186    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:30.479077    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:30.479091    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:33.002107    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:38.002671    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:38.002990    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:38.031515    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:38.031620    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:38.049052    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:38.049135    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:38.066312    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:38.066385    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:38.076891    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:38.076964    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:38.087956    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:38.088034    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:38.098788    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:38.098862    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:38.109105    4490 logs.go:276] 0 containers: []
	W0904 13:18:38.109116    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:38.109177    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:38.121137    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:38.121152    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:38.121157    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:38.158197    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:38.158206    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:38.173314    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:38.173326    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:38.185325    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:38.185339    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:38.189644    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:38.189654    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:38.224049    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:38.224062    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:38.239070    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:38.239079    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:38.253254    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:38.253268    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:38.266085    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:38.266094    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:38.280242    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:38.280254    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:38.299068    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:38.299078    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:38.310523    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:38.310533    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:38.333978    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:38.333991    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:40.848502    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:45.850699    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:45.850888    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:45.870402    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:45.870500    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:45.884581    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:45.884663    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:45.898657    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:18:45.898731    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:45.909492    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:45.909567    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:45.919931    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:45.919996    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:45.930503    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:45.930578    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:45.940703    4490 logs.go:276] 0 containers: []
	W0904 13:18:45.940714    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:45.940770    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:45.951305    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:45.951324    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:18:45.951330    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:18:45.962425    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:45.962437    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:45.974353    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:45.974365    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:45.989104    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:45.989114    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:46.001297    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:46.001308    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:46.026883    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:46.026893    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:46.041267    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:46.041280    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:46.054599    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:46.054612    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:46.073279    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:46.073290    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:46.084485    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:46.084498    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:46.124307    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:18:46.124317    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:18:46.135671    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:46.135684    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:46.151099    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:46.151109    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:46.187034    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:46.187047    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:46.199274    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:46.199289    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
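	Note the change in the cycle above: from 13:18:45 onward the k8s_coredns filter matches 4 containers ([f549e8e25192 064346220ce8] in addition to the earlier [c2e4fd07d881 083b85426991]), which suggests kubelet recreated the coredns containers while the apiserver stayed unreachable. One way to spot such restarts from the same docker state, sketched under the assumption that the k8s_<container>_<pod>_... naming convention holds (this helper is illustrative, not part of minikube):

	    # Count all-time containers per component name; a rising count for a
	    # given name indicates its containers are being recreated.
	    docker ps -a --format '{{.Names}}' \
	      | grep '^k8s_' \
	      | cut -d_ -f2 \
	      | sort | uniq -c | sort -rn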
	I0904 13:18:48.706017    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:53.708428    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:53.708768    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:53.741688    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:53.741825    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:53.759028    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:53.759144    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:53.772537    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:18:53.772627    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:53.784381    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:53.784455    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:53.794891    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:53.794975    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:53.805853    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:53.805927    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:53.816382    4490 logs.go:276] 0 containers: []
	W0904 13:18:53.816394    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:53.816467    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:53.827865    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:53.827882    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:53.827887    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:53.864934    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:53.864945    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:53.879945    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:18:53.879954    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:18:53.891435    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:53.891446    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:53.902208    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:18:53.902218    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:18:53.913659    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:53.913669    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:53.925829    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:53.925837    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:53.940429    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:53.940442    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:53.953134    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:53.953148    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:53.967685    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:53.967696    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:53.979693    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:53.979704    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:54.017607    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:54.017616    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:54.022211    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:54.022218    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:54.033551    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:54.033562    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:54.050867    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:54.050878    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:56.575134    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:01.577318    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:01.577430    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:01.589062    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:01.589141    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:01.599913    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:01.599978    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:01.610697    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:01.610767    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:01.621139    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:01.621209    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:01.632180    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:01.632254    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:01.646239    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:01.646305    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:01.656525    4490 logs.go:276] 0 containers: []
	W0904 13:19:01.656535    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:01.656590    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:01.667075    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:01.667092    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:01.667098    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:01.678749    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:01.678758    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:01.689909    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:01.689922    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:01.706218    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:01.706228    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:01.743629    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:01.743642    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:01.754975    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:01.754990    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:01.767416    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:01.767428    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:01.782183    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:01.782196    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:01.786484    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:01.786493    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:01.798003    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:01.798014    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:01.809967    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:01.809981    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:01.827936    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:01.827947    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:01.853184    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:01.853193    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:01.867863    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:01.867879    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:01.879400    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:01.879414    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:04.418632    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:09.420992    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:09.421350    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:09.462354    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:09.462499    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:09.484513    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:09.484597    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:09.507649    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:09.507720    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:09.519426    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:09.519500    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:09.530087    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:09.530156    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:09.544699    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:09.544772    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:09.555307    4490 logs.go:276] 0 containers: []
	W0904 13:19:09.555327    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:09.555393    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:09.565998    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:09.566017    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:09.566022    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:09.601548    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:09.601560    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:09.613509    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:09.613522    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:09.625486    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:09.625496    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:09.639297    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:09.639311    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:09.651077    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:09.651088    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:09.669026    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:09.669039    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:09.692501    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:09.692513    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:09.707157    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:09.707171    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:09.722760    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:09.722771    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:09.734461    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:09.734472    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:09.746718    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:09.746732    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:09.785361    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:09.785368    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:09.790081    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:09.790092    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:09.801904    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:09.801915    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:12.320243    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:17.322559    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:17.322804    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:17.344448    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:17.344536    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:17.356642    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:17.356709    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:17.367608    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:17.367680    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:17.377813    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:17.377885    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:17.388632    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:17.388695    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:17.399317    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:17.399394    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:17.409308    4490 logs.go:276] 0 containers: []
	W0904 13:19:17.409321    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:17.409378    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:17.419770    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:17.419786    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:17.419792    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:17.424904    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:17.424910    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:17.439069    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:17.439080    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:17.450396    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:17.450404    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:17.473901    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:17.473910    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:17.485263    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:17.485275    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:17.499565    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:17.499577    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:17.510705    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:17.510719    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:17.523213    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:17.523226    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:17.562836    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:17.562848    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:17.589384    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:17.589394    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:17.603578    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:17.603588    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:17.622174    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:17.622185    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:17.657112    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:17.657124    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:17.669104    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:17.669115    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:20.183228    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:25.184032    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:25.184328    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:25.216718    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:25.216844    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:25.236452    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:25.236550    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:25.250735    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:25.250815    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:25.262470    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:25.262538    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:25.273267    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:25.273338    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:25.284326    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:25.284396    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:25.294483    4490 logs.go:276] 0 containers: []
	W0904 13:19:25.294494    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:25.294553    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:25.305290    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:25.305306    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:25.305312    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:25.316550    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:25.316564    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:25.340505    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:25.340516    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:25.377924    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:25.377933    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:25.398291    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:25.398304    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:25.424307    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:25.424317    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:25.444936    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:25.444945    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:25.481045    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:25.481056    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:25.493179    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:25.493193    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:25.507320    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:25.507334    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:25.522452    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:25.522465    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:25.534058    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:25.534069    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:25.545993    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:25.546004    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:25.558239    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:25.558250    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:25.562709    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:25.562720    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:28.078828    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:33.081132    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:33.081493    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:33.106381    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:33.106489    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:33.124474    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:33.124554    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:33.137687    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:33.137762    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:33.149337    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:33.149401    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:33.160071    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:33.160129    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:33.170170    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:33.170232    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:33.180325    4490 logs.go:276] 0 containers: []
	W0904 13:19:33.180337    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:33.180395    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:33.190684    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:33.190698    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:33.190703    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:33.228730    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:33.228743    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:33.243543    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:33.243552    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:33.258036    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:33.258050    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:33.269448    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:33.269463    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:33.281060    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:33.281075    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:33.305585    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:33.305592    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:33.309653    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:33.309660    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:33.345209    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:33.345220    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:33.357347    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:33.357357    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:33.372390    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:33.372404    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:33.384189    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:33.384200    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:33.401763    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:33.401773    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:33.413594    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:33.413606    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:33.427347    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:33.427359    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:35.943673    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:40.946022    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:40.946213    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:40.960192    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:40.960282    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:40.971827    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:40.971899    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:40.983003    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:40.983073    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:40.994038    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:40.994120    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:41.004530    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:41.004599    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:41.015722    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:41.015790    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:41.025609    4490 logs.go:276] 0 containers: []
	W0904 13:19:41.025620    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:41.025675    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:41.035615    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:41.035633    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:41.035638    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:41.047164    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:41.047174    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:41.058651    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:41.058662    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:41.095455    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:41.095465    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:41.109503    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:41.109513    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:41.121656    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:41.121669    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:41.133505    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:41.133516    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:41.151000    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:41.151010    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:41.162879    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:41.162889    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:41.200653    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:41.200666    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:41.220438    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:41.220452    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:41.225117    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:41.225126    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:41.240308    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:41.240316    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:41.251845    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:41.251857    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:41.270993    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:41.271004    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:43.797014    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:48.799492    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:48.799631    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:48.815587    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:48.815671    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:48.828301    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:48.828374    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:48.839426    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:48.839503    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:48.852761    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:48.852833    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:48.864341    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:48.864408    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:48.875083    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:48.875157    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:48.885797    4490 logs.go:276] 0 containers: []
	W0904 13:19:48.885810    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:48.885866    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:48.896054    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:48.896070    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:48.896075    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:48.935117    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:48.935125    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:48.939357    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:48.939366    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:48.953900    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:48.953911    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:48.971766    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:48.971775    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:48.984116    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:48.984125    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:48.998529    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:48.998537    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:49.012370    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:49.012380    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:49.024021    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:49.024030    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:49.035939    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:49.035949    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:49.075720    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:49.075732    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:49.087560    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:49.087570    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:49.099754    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:49.099764    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:49.111365    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:49.111376    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:49.122789    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:49.122799    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:51.648488    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:56.650688    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:56.650809    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:56.661893    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:56.661970    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:56.672816    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:56.672895    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:56.683999    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:56.684076    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:56.695455    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:56.695520    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:56.706402    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:56.706473    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:56.717531    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:56.717594    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:56.728211    4490 logs.go:276] 0 containers: []
	W0904 13:19:56.728223    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:56.728285    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:56.739375    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:56.739392    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:56.739396    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:56.751848    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:56.751863    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:56.764089    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:56.764102    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:56.790349    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:56.790366    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:56.803033    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:56.803047    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:56.819898    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:56.819910    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:56.833049    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:56.833060    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:56.871507    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:56.871517    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:56.887074    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:56.887086    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:56.901797    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:56.901808    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:56.906572    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:56.906581    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:56.921581    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:56.921592    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:56.937345    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:56.937356    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:56.976571    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:56.976582    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:56.988896    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:56.988906    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:59.509052    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:04.511190    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:04.511315    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:20:04.522285    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:20:04.522365    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:20:04.533295    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:20:04.533362    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:20:04.543782    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:20:04.543859    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:20:04.554637    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:20:04.554709    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:20:04.565323    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:20:04.565394    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:20:04.575866    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:20:04.575932    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:20:04.585837    4490 logs.go:276] 0 containers: []
	W0904 13:20:04.585848    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:20:04.585907    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:20:04.596134    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:20:04.596150    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:20:04.596155    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:20:04.619227    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:20:04.619238    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:20:04.633204    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:20:04.633215    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:20:04.670895    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:20:04.670909    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:20:04.681499    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:20:04.681517    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:20:04.715830    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:20:04.715844    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:20:04.730759    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:20:04.730773    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:20:04.752222    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:20:04.752233    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:20:04.764030    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:20:04.764043    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:20:04.775727    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:20:04.775736    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:20:04.787789    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:20:04.787808    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:20:04.799346    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:20:04.799359    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:20:04.811218    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:20:04.811228    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:20:04.848790    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:20:04.848803    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:20:04.866892    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:20:04.866904    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:20:07.380679    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:12.382849    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:12.382984    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:20:12.396075    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:20:12.396153    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:20:12.408229    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:20:12.408300    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:20:12.418549    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:20:12.418622    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:20:12.428752    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:20:12.428820    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:20:12.439088    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:20:12.439172    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:20:12.450489    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:20:12.450560    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:20:12.461888    4490 logs.go:276] 0 containers: []
	W0904 13:20:12.461900    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:20:12.461964    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:20:12.473650    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:20:12.473668    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:20:12.473674    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:20:12.487550    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:20:12.487563    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:20:12.499748    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:20:12.499758    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:20:12.511774    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:20:12.511785    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:20:12.529575    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:20:12.529586    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:20:12.542052    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:20:12.542063    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:20:12.554786    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:20:12.554797    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:20:12.566420    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:20:12.566430    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:20:12.581004    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:20:12.581014    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:20:12.606379    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:20:12.606392    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:20:12.643592    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:20:12.643611    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:20:12.680468    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:20:12.680482    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:20:12.692555    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:20:12.692569    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:20:12.696986    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:20:12.696993    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:20:12.711580    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:20:12.711593    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:20:15.225159    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:20.227313    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:20.227447    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:20:20.241412    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:20:20.241494    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:20:20.252734    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:20:20.252803    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:20:20.263098    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:20:20.263172    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:20:20.274483    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:20:20.274568    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:20:20.284887    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:20:20.284960    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:20:20.295400    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:20:20.295473    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:20:20.305760    4490 logs.go:276] 0 containers: []
	W0904 13:20:20.305771    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:20:20.305823    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:20:20.316097    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:20:20.316113    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:20:20.316119    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:20:20.354107    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:20:20.354120    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:20:20.365488    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:20:20.365498    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:20:20.377054    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:20:20.377066    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:20:20.390413    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:20:20.390426    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:20:20.402259    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:20:20.402269    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:20:20.419503    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:20:20.419513    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:20:20.431396    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:20:20.431408    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:20:20.444279    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:20:20.444291    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:20:20.460471    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:20:20.460485    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:20:20.471960    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:20:20.471975    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:20:20.483309    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:20:20.483320    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:20:20.497897    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:20:20.497907    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:20:20.520323    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:20:20.520332    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:20:20.557464    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:20:20.557472    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:20:23.063485    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:28.065625    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:28.071208    4490 out.go:201] 
	W0904 13:20:28.075118    4490 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0904 13:20:28.075129    4490 out.go:270] * 
	W0904 13:20:28.076039    4490 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:20:28.087069    4490 out.go:201] 

** /stderr **
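The repeating pattern in the stderr above — a "Checking apiserver healthz" probe roughly every 2.5s, each timing out after ~5s, followed by another round of container-log gathering — is minikube polling https://10.0.2.15:8443/healthz until its 6m0s node-start budget expires. A minimal Go sketch of that kind of poll, useful for reproducing the probe by hand; the function name, intervals, and TLS handling are assumptions for illustration, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz (hypothetical) probes a /healthz endpoint until it returns 200
// or the overall deadline passes, mirroring the loop visible in the log.
func pollHealthz(url string, interval, perProbe time.Duration, deadline time.Time) error {
	client := &http.Client{
		Timeout: perProbe, // yields "Client.Timeout exceeded while awaiting headers" on a hung apiserver
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // the apiserver serves a self-signed cert
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(interval) // back off, then re-probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	// Endpoint and wait budget taken from the log above (10.0.2.15:8443, 6m0s).
	err := pollHealthz("https://10.0.2.15:8443/healthz", 2500*time.Millisecond, 5*time.Second, time.Now().Add(6*time.Minute))
	fmt.Println(err)
}

Against a VM whose apiserver never comes up, this exits with the same "never reported healthy" failure that ends the run above.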
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-478000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-09-04 13:20:28.195827 -0700 PDT m=+3345.262529084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-478000 -n running-upgrade-478000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-478000 -n running-upgrade-478000: exit status 2 (15.680240042s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
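Here `status --format={{.Host}}` printed "Running" yet exited 2; minikube encodes component health in the exit code rather than treating a degraded cluster as a command failure, which is why the harness marks the error "may be ok". A short Go sketch of making that distinction with os/exec; the binary path and profile name are taken from this run, while the helper itself is an illustrative assumption:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus (hypothetical) runs `minikube status` and separates "command
// could not run" from "command ran but reported an unhealthy cluster".
func hostStatus(profile string) (state string, exitCode int, err error) {
	cmd := exec.Command("out/minikube-darwin-arm64", "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: stdout (e.g. "Running") is still meaningful, so
		// return it alongside the exit code instead of failing outright.
		return string(out), exitErr.ExitCode(), nil
	}
	if err != nil {
		return "", -1, err // the binary itself failed to start
	}
	return string(out), 0, nil
}

func main() {
	state, code, err := hostStatus("running-upgrade-478000")
	fmt.Printf("host=%q exit=%d err=%v\n", state, code, err)
}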
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-478000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-747000          | force-systemd-flag-747000 | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-978000              | force-systemd-env-978000  | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-978000           | force-systemd-env-978000  | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT | 04 Sep 24 13:10 PDT |
	| start   | -p docker-flags-174000                | docker-flags-174000       | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-747000             | force-systemd-flag-747000 | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-747000          | force-systemd-flag-747000 | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT | 04 Sep 24 13:10 PDT |
	| start   | -p cert-expiration-733000             | cert-expiration-733000    | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-174000 ssh               | docker-flags-174000       | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-174000 ssh               | docker-flags-174000       | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-174000                | docker-flags-174000       | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT | 04 Sep 24 13:10 PDT |
	| start   | -p cert-options-659000                | cert-options-659000       | jenkins | v1.34.0 | 04 Sep 24 13:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-659000 ssh               | cert-options-659000       | jenkins | v1.34.0 | 04 Sep 24 13:11 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-659000 -- sudo        | cert-options-659000       | jenkins | v1.34.0 | 04 Sep 24 13:11 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-659000                | cert-options-659000       | jenkins | v1.34.0 | 04 Sep 24 13:11 PDT | 04 Sep 24 13:11 PDT |
	| start   | -p running-upgrade-478000             | minikube                  | jenkins | v1.26.0 | 04 Sep 24 13:11 PDT | 04 Sep 24 13:12 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-478000             | running-upgrade-478000    | jenkins | v1.34.0 | 04 Sep 24 13:12 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-733000             | cert-expiration-733000    | jenkins | v1.34.0 | 04 Sep 24 13:14 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-733000             | cert-expiration-733000    | jenkins | v1.34.0 | 04 Sep 24 13:14 PDT | 04 Sep 24 13:14 PDT |
	| start   | -p kubernetes-upgrade-895000          | kubernetes-upgrade-895000 | jenkins | v1.34.0 | 04 Sep 24 13:14 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-895000          | kubernetes-upgrade-895000 | jenkins | v1.34.0 | 04 Sep 24 13:14 PDT | 04 Sep 24 13:14 PDT |
	| start   | -p kubernetes-upgrade-895000          | kubernetes-upgrade-895000 | jenkins | v1.34.0 | 04 Sep 24 13:14 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-895000          | kubernetes-upgrade-895000 | jenkins | v1.34.0 | 04 Sep 24 13:14 PDT | 04 Sep 24 13:14 PDT |
	| start   | -p stopped-upgrade-175000             | minikube                  | jenkins | v1.26.0 | 04 Sep 24 13:14 PDT | 04 Sep 24 13:15 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-175000 stop           | minikube                  | jenkins | v1.26.0 | 04 Sep 24 13:15 PDT | 04 Sep 24 13:15 PDT |
	| start   | -p stopped-upgrade-175000             | stopped-upgrade-175000    | jenkins | v1.34.0 | 04 Sep 24 13:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 13:15:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 13:15:21.440044    4660 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:15:21.440187    4660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:15:21.440191    4660 out.go:358] Setting ErrFile to fd 2...
	I0904 13:15:21.440194    4660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:15:21.440336    4660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:15:21.441499    4660 out.go:352] Setting JSON to false
	I0904 13:15:21.461219    4660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4485,"bootTime":1725476436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:15:21.461305    4660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:15:21.466578    4660 out.go:177] * [stopped-upgrade-175000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:15:21.474444    4660 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:15:21.474501    4660 notify.go:220] Checking for updates...
	I0904 13:15:21.481523    4660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:15:21.484530    4660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:15:21.487574    4660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:15:21.490498    4660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:15:21.493523    4660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:15:21.496668    4660 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:15:21.499455    4660 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0904 13:15:21.502532    4660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:15:21.505368    4660 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:15:21.512496    4660 start.go:297] selected driver: qemu2
	I0904 13:15:21.512502    4660 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:15:21.512552    4660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:15:21.515184    4660 cni.go:84] Creating CNI manager for ""
	I0904 13:15:21.515201    4660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:15:21.515228    4660 start.go:340] cluster config:
	{Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:15:21.515280    4660 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:15:21.523454    4660 out.go:177] * Starting "stopped-upgrade-175000" primary control-plane node in "stopped-upgrade-175000" cluster
	I0904 13:15:21.527521    4660 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0904 13:15:21.527538    4660 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0904 13:15:21.527546    4660 cache.go:56] Caching tarball of preloaded images
	I0904 13:15:21.527607    4660 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:15:21.527618    4660 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0904 13:15:21.527678    4660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/config.json ...
	I0904 13:15:21.528196    4660 start.go:360] acquireMachinesLock for stopped-upgrade-175000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:15:21.528228    4660 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "stopped-upgrade-175000"
	I0904 13:15:21.528238    4660 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:15:21.528243    4660 fix.go:54] fixHost starting: 
	I0904 13:15:21.528349    4660 fix.go:112] recreateIfNeeded on stopped-upgrade-175000: state=Stopped err=<nil>
	W0904 13:15:21.528357    4660 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:15:21.532535    4660 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-175000" ...
	I0904 13:15:24.397114    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:24.397554    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:24.438680    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:24.438817    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:24.459297    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:24.459379    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:24.473991    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:24.474068    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:24.487022    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:24.487096    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:24.497935    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:24.498006    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:24.512120    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:24.512189    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:24.524818    4490 logs.go:276] 0 containers: []
	W0904 13:15:24.524829    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:24.524890    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:24.535437    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:24.535454    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:24.535459    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:24.546952    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:24.546963    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:24.559914    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:24.559928    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:24.572405    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:24.572420    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:24.583658    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:24.583670    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:24.595360    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:24.595371    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:24.630206    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:24.630217    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:24.645250    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:24.645262    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:24.662926    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:24.662938    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:24.703159    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:24.703167    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:24.717391    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:24.717401    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:24.728934    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:24.728945    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:24.739931    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:24.739945    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:24.751239    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:24.751251    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:24.762511    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:24.762523    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:24.786223    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:24.786238    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:24.790910    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:24.790917    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
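
The process 4490 loop above repeats throughout this trace: probe the apiserver /healthz endpoint with a per-request client timeout, and on each failure dump every component's logs before retrying. A minimal Go sketch of that polling shape, assuming a plain HTTPS GET with certificate verification disabled (the apiserver uses a self-signed cert); the real checker lives in minikube's api_server.go and also inspects response bodies:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Poll /healthz until it answers 200 or the overall deadline passes.
	func waitHealthy(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request, like the "Client.Timeout exceeded" errors above
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(3 * time.Second) // the real loop gathers logs here
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}

	func main() {
		if err := waitHealthy("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
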
	I0904 13:15:21.540521    4660 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:15:21.540607    4660 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50529-:22,hostfwd=tcp::50530-:2376,hostname=stopped-upgrade-175000 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/disk.qcow2
	I0904 13:15:21.586534    4660 main.go:141] libmachine: STDOUT: 
	I0904 13:15:21.586567    4660 main.go:141] libmachine: STDERR: 
	I0904 13:15:21.586572    4660 main.go:141] libmachine: Waiting for VM to start (ssh -p 50529 docker@127.0.0.1)...
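
The libmachine line above is the full qemu-system-aarch64 invocation; the user-mode NIC's hostfwd rules are what let minikube reach the guest's SSH (22) and Docker TLS (2376) ports via localhost 50529/50530. A trimmed sketch of launching such a process from Go with os/exec; the disk path is a stand-in for the full machines/ directory path in the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// HVF acceleration, 2200 MB RAM, 2 vCPUs, and user-mode networking
		// forwarding host 50529 -> guest 22 and host 50530 -> guest 2376.
		cmd := exec.Command("qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf",
			"-m", "2200",
			"-smp", "2",
			"-nic", "user,model=virtio,hostfwd=tcp::50529-:22,hostfwd=tcp::50530-:2376",
			"-daemonize",
			"disk.qcow2", // stand-in path
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("qemu failed: %v\n%s", err, out)
		}
	}
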
	I0904 13:15:27.304336    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:32.306469    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:32.306582    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:32.317867    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:32.317935    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:32.328891    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:32.328963    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:32.339972    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:32.340047    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:32.350391    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:32.350459    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:32.361219    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:32.361280    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:32.371820    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:32.371893    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:32.383302    4490 logs.go:276] 0 containers: []
	W0904 13:15:32.383315    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:32.383383    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:32.396173    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:32.396194    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:32.396200    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:32.441273    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:32.441299    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:32.455468    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:32.455486    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:32.471144    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:32.471160    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:32.484883    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:32.484896    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:32.509788    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:32.509807    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:32.554524    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:32.554538    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:32.573009    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:32.573024    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:32.588912    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:32.588930    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:32.595458    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:32.595480    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:32.621138    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:32.621164    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:32.634110    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:32.634126    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:32.651686    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:32.651703    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:32.667851    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:32.667870    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:32.681382    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:32.681394    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:32.700226    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:32.700249    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:32.713709    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:32.713724    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:35.229811    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:40.231993    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:40.232174    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:40.252021    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:40.252096    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:40.264658    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:40.264728    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:40.276187    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:40.276257    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:40.286930    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:40.286992    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:40.297220    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:40.297283    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:40.308757    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:40.308823    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:40.318552    4490 logs.go:276] 0 containers: []
	W0904 13:15:40.318564    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:40.318619    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:40.329484    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:40.329502    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:40.329507    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:40.334386    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:40.334397    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:40.347062    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:40.347082    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:40.358134    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:40.358147    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:40.370033    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:40.370043    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:40.384503    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:40.384514    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:40.407972    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:40.407980    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:40.442854    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:40.442867    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:40.454836    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:40.454847    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:40.467001    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:40.467014    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:40.486701    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:40.486712    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:40.501228    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:40.501238    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:40.512247    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:40.512259    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:40.525153    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:40.525164    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:40.565822    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:40.565833    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:40.577843    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:40.577853    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:40.588907    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:40.588918    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:41.409785    4660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/config.json ...
	I0904 13:15:41.410336    4660 machine.go:93] provisionDockerMachine start ...
	I0904 13:15:41.410437    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.410818    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.410829    4660 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 13:15:41.490053    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 13:15:41.490083    4660 buildroot.go:166] provisioning hostname "stopped-upgrade-175000"
	I0904 13:15:41.490184    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.490383    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.490394    4660 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-175000 && echo "stopped-upgrade-175000" | sudo tee /etc/hostname
	I0904 13:15:41.564232    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-175000
	
	I0904 13:15:41.564283    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.564397    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.564406    4660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-175000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-175000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-175000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 13:15:41.630311    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
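
The provisioner above reads the current hostname, sets the new one with sudo hostname + /etc/hostname, then patches /etc/hosts so 127.0.1.1 resolves to the machine name. A small sketch of how such a command string can be assembled before being sent over SSH; the shell text is quoted from the snippet above, with only the hostname parameterized:

	package main

	import "fmt"

	// Build the /etc/hosts patch command the provisioner runs over SSH.
	func hostsPatch(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(hostsPatch("stopped-upgrade-175000"))
	}
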
	I0904 13:15:41.630324    4660 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19575-1140/.minikube CaCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19575-1140/.minikube}
	I0904 13:15:41.630332    4660 buildroot.go:174] setting up certificates
	I0904 13:15:41.630336    4660 provision.go:84] configureAuth start
	I0904 13:15:41.630341    4660 provision.go:143] copyHostCerts
	I0904 13:15:41.630424    4660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem, removing ...
	I0904 13:15:41.630430    4660 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem
	I0904 13:15:41.630894    4660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem (1078 bytes)
	I0904 13:15:41.631132    4660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem, removing ...
	I0904 13:15:41.631136    4660 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem
	I0904 13:15:41.631200    4660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem (1123 bytes)
	I0904 13:15:41.631335    4660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem, removing ...
	I0904 13:15:41.631338    4660 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem
	I0904 13:15:41.631396    4660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem (1675 bytes)
	I0904 13:15:41.631489    4660 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-175000 san=[127.0.0.1 localhost minikube stopped-upgrade-175000]
	I0904 13:15:41.835985    4660 provision.go:177] copyRemoteCerts
	I0904 13:15:41.836031    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 13:15:41.836041    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:15:41.870089    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 13:15:41.876906    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0904 13:15:41.883525    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 13:15:41.890338    4660 provision.go:87] duration metric: took 259.996875ms to configureAuth
	I0904 13:15:41.890347    4660 buildroot.go:189] setting minikube options for container-runtime
	I0904 13:15:41.890444    4660 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:15:41.890479    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.890572    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.890576    4660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 13:15:41.955514    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 13:15:41.955523    4660 buildroot.go:70] root file system type: tmpfs
	I0904 13:15:41.955578    4660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 13:15:41.955645    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.955758    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.955796    4660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 13:15:42.022832    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 13:15:42.022890    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:42.023008    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:42.023017    4660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 13:15:42.362148    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0904 13:15:42.362160    4660 machine.go:96] duration metric: took 951.829833ms to provisionDockerMachine
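
The docker.service update above is deliberately idempotent: the unit is written to docker.service.new, and only when diff reports a difference (here it fails outright because the target does not exist yet) does the mv/daemon-reload/enable/restart branch run. A minimal local sketch of that write-compare-swap pattern, assuming plain file I/O rather than the SSH transport used in the log:

	package main

	import (
		"bytes"
		"log"
		"os"
	)

	// Install a unit file only when its content differs, mirroring the
	// `diff ... || { mv ...; systemctl daemon-reload ... }` step above.
	func installUnit(path string, content []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // up to date, nothing to restart
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(path+".new", path)
	}

	func main() {
		changed, err := installUnit("docker.service", []byte("[Unit]\nDescription=example\n"))
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("changed=%v (caller would run daemon-reload && restart)", changed)
	}
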
	I0904 13:15:42.362167    4660 start.go:293] postStartSetup for "stopped-upgrade-175000" (driver="qemu2")
	I0904 13:15:42.362173    4660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 13:15:42.362237    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 13:15:42.362246    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:15:42.397712    4660 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 13:15:42.398981    4660 info.go:137] Remote host: Buildroot 2021.02.12
	I0904 13:15:42.398988    4660 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/addons for local assets ...
	I0904 13:15:42.399073    4660 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/files for local assets ...
	I0904 13:15:42.399201    4660 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem -> 16612.pem in /etc/ssl/certs
	I0904 13:15:42.399326    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 13:15:42.402204    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem --> /etc/ssl/certs/16612.pem (1708 bytes)
	I0904 13:15:42.409281    4660 start.go:296] duration metric: took 47.110375ms for postStartSetup
	I0904 13:15:42.409294    4660 fix.go:56] duration metric: took 20.881357458s for fixHost
	I0904 13:15:42.409324    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:42.409427    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:42.409432    4660 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 13:15:42.475839    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725480942.098003921
	
	I0904 13:15:42.475848    4660 fix.go:216] guest clock: 1725480942.098003921
	I0904 13:15:42.475853    4660 fix.go:229] Guest: 2024-09-04 13:15:42.098003921 -0700 PDT Remote: 2024-09-04 13:15:42.409295 -0700 PDT m=+20.999083751 (delta=-311.291079ms)
	I0904 13:15:42.475866    4660 fix.go:200] guest clock delta is within tolerance: -311.291079ms
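
The skew check above compares the guest clock (read via `date +%s.%N` over SSH) with the host clock: 13:15:42.098003921 guest minus 13:15:42.409295 host gives the logged -311.291079ms. A sketch of the comparison using the values from this trace; the ±2s tolerance is an assumption for illustration, as the actual threshold lives in minikube's fix.go and is not shown in this log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest epoch seconds/nanoseconds from `date +%s.%N` above;
		// host time captured just after the SSH round trip.
		guest := time.Unix(1725480942, 98003921)
		host := time.Date(2024, 9, 4, 13, 15, 42, 409295000, time.FixedZone("PDT", -7*3600))

		delta := guest.Sub(host)
		fmt.Printf("guest clock delta: %v\n", delta) // ≈ -311.291079ms

		const tolerance = 2 * time.Second // assumed bound
		if delta < -tolerance || delta > tolerance {
			fmt.Println("would resync guest clock")
		} else {
			fmt.Println("within tolerance, leaving guest clock alone")
		}
	}
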
	I0904 13:15:42.475868    4660 start.go:83] releasing machines lock for "stopped-upgrade-175000", held for 20.94794175s
	I0904 13:15:42.475944    4660 ssh_runner.go:195] Run: cat /version.json
	I0904 13:15:42.475956    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:15:42.475945    4660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 13:15:42.475993    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	W0904 13:15:42.476630    4660 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50529: connect: connection refused
	I0904 13:15:42.476652    4660 retry.go:31] will retry after 230.38462ms: dial tcp [::1]:50529: connect: connection refused
	W0904 13:15:42.744588    4660 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0904 13:15:42.744685    4660 ssh_runner.go:195] Run: systemctl --version
	I0904 13:15:42.747262    4660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 13:15:42.749814    4660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 13:15:42.749853    4660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0904 13:15:42.754234    4660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0904 13:15:42.760437    4660 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 13:15:42.760447    4660 start.go:495] detecting cgroup driver to use...
	I0904 13:15:42.760518    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 13:15:42.768683    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0904 13:15:42.772381    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 13:15:42.775312    4660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 13:15:42.775334    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 13:15:42.778469    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 13:15:42.781655    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 13:15:42.784681    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 13:15:42.787356    4660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 13:15:42.790458    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 13:15:42.793691    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 13:15:42.796662    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0904 13:15:42.799419    4660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 13:15:42.802350    4660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 13:15:42.805271    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:42.886931    4660 ssh_runner.go:195] Run: sudo systemctl restart containerd
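
The run of sed commands above edits /etc/containerd/config.toml in place, forcing SystemdCgroup = false (the "cgroupfs" driver), the runc v2 runtime, the pause:3.7 sandbox image, and the /etc/cni/net.d conf_dir, before daemon-reload and a containerd restart. A sketch of one of those rewrites using Go's regexp package, assuming the config file is local rather than behind SSH:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// Force cgroupfs in a containerd config, like the
	// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
	// step above.
	func main() {
		const path = "/etc/containerd/config.toml" // assumes a local file
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}
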
	I0904 13:15:42.893417    4660 start.go:495] detecting cgroup driver to use...
	I0904 13:15:42.893480    4660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 13:15:42.901682    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 13:15:42.906112    4660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 13:15:42.913130    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 13:15:42.917830    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 13:15:42.922382    4660 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 13:15:42.966079    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 13:15:42.971531    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 13:15:42.977232    4660 ssh_runner.go:195] Run: which cri-dockerd
	I0904 13:15:42.978292    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 13:15:42.980977    4660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0904 13:15:42.985924    4660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 13:15:43.063126    4660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 13:15:43.130918    4660 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 13:15:43.130973    4660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 13:15:43.136216    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:43.201303    4660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 13:15:44.362251    4660 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160950375s)
	I0904 13:15:44.362311    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 13:15:44.367068    4660 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0904 13:15:44.375310    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 13:15:44.380335    4660 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 13:15:44.450081    4660 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 13:15:44.514664    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:44.578322    4660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 13:15:44.584734    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 13:15:44.588937    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:44.642106    4660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 13:15:44.683033    4660 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 13:15:44.683113    4660 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 13:15:44.686399    4660 start.go:563] Will wait 60s for crictl version
	I0904 13:15:44.686455    4660 ssh_runner.go:195] Run: which crictl
	I0904 13:15:44.687698    4660 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 13:15:44.702300    4660 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0904 13:15:44.702371    4660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 13:15:44.717769    4660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 13:15:43.107781    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:44.737646    4660 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0904 13:15:44.737730    4660 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0904 13:15:44.739207    4660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 13:15:44.743187    4660 kubeadm.go:883] updating cluster {Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0904 13:15:44.743237    4660 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0904 13:15:44.743285    4660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 13:15:44.754344    4660 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 13:15:44.754358    4660 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
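
The preload tarball for this old release still carries pre-rename k8s.gcr.io image names, so the check for registry.k8s.io/kube-apiserver:v1.24.1 fails and minikube falls back to loading images one by one from its cache. A minimal sketch of that membership check over the `docker images` output, with names taken from the stdout block above:

	package main

	import "fmt"

	func main() {
		// Repository:Tag lines from `docker images` above (subset shown).
		got := map[string]bool{
			"k8s.gcr.io/kube-apiserver:v1.24.1": true,
			"k8s.gcr.io/kube-proxy:v1.24.1":     true,
			// ... remaining entries from the stdout block above
		}
		want := "registry.k8s.io/kube-apiserver:v1.24.1"
		if !got[want] {
			fmt.Printf("%s wasn't preloaded; falling back to cached image load\n", want)
		}
	}
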
	I0904 13:15:44.754406    4660 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 13:15:44.758163    4660 ssh_runner.go:195] Run: which lz4
	I0904 13:15:44.759363    4660 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 13:15:44.760776    4660 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 13:15:44.760791    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0904 13:15:45.691201    4660 docker.go:649] duration metric: took 931.879167ms to copy over tarball
	I0904 13:15:45.691257    4660 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
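
The preload path above is: stat the tarball on the guest, scp the ~359 MB lz4 archive when the stat fails, then untar it into /var with extended attributes preserved (so file capabilities survive) and delete it afterwards. A sketch of the extraction step via os/exec, assuming tar and lz4 are on the guest's PATH as they are in the buildroot image:

	package main

	import (
		"log"
		"os/exec"
	)

	// Extract a preloaded-images tarball into /var, mirroring the
	// `sudo tar --xattrs ... -I lz4 -C /var -xf /preloaded.tar.lz4` step.
	func main() {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4",
			"-C", "/var",
			"-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
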
	I0904 13:15:48.109904    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:48.110009    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:48.121741    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:48.121815    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:48.132993    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:48.133072    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:48.144119    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:48.144194    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:48.160902    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:48.160978    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:48.178882    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:48.178967    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:48.189636    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:48.189705    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:48.204437    4490 logs.go:276] 0 containers: []
	W0904 13:15:48.204450    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:48.204513    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:48.219778    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:48.219797    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:48.219803    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:48.231238    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:48.231253    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:48.243188    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:48.243202    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:48.255085    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:48.255100    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:15:48.302928    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:48.302942    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:48.314104    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:48.314115    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:48.328096    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:48.328108    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:48.339556    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:48.339569    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:48.351330    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:48.351342    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:48.364712    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:48.364725    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:48.382810    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:48.382824    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:48.407701    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:48.407712    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:48.412021    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:48.412033    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:48.450901    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:48.450915    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:48.468216    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:48.468231    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:48.480353    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:48.480365    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:48.493294    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:48.493306    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:46.843867    4660 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152615375s)
	I0904 13:15:46.843885    4660 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0904 13:15:46.859292    4660 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 13:15:46.862604    4660 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0904 13:15:46.867786    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:46.933711    4660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 13:15:48.595185    4660 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.661470667s)
	I0904 13:15:48.595270    4660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 13:15:48.608213    4660 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 13:15:48.608222    4660 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0904 13:15:48.608228    4660 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0904 13:15:48.612694    4660 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:48.614342    4660 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:48.616684    4660 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0904 13:15:48.616726    4660 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:48.618333    4660 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:48.618348    4660 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:48.619539    4660 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:48.619645    4660 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0904 13:15:48.621097    4660 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:48.621323    4660 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:48.622308    4660 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:48.622420    4660 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:48.623324    4660 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:48.623593    4660 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:48.624555    4660 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:48.625447    4660 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.045018    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0904 13:15:49.056672    4660 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0904 13:15:49.056697    4660 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0904 13:15:49.056746    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0904 13:15:49.066938    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0904 13:15:49.067331    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0904 13:15:49.069242    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0904 13:15:49.069252    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0904 13:15:49.070083    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:49.077677    4660 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0904 13:15:49.077689    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0904 13:15:49.080961    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:49.081408    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:49.082576    4660 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0904 13:15:49.082592    4660 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:49.082620    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0904 13:15:49.085464    4660 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0904 13:15:49.085576    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:49.125286    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
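
The sequence above is minikube's cached-image load path: inspect the image in the guest's Docker daemon, remove any wrong-arch copy, stat the tarball on the guest, scp it over from the host cache when absent, then pipe it into docker load. A minimal Go sketch of that flow, assuming a plain ssh/scp transport; runSSH and loadCachedImage are hypothetical stand-ins for the real logic in ssh_runner.go, docker.go and cache_images.go.

package main

import (
	"fmt"
	"os/exec"
)

// runSSH runs one command on the guest, roughly what ssh_runner.go logs as "Run:".
func runSSH(host, cmd string) error {
	return exec.Command("ssh", host, cmd).Run()
}

// loadCachedImage mirrors the stat -> scp -> docker load sequence in the log.
func loadCachedImage(host, localTar, remoteTar string) error {
	// The existence check exits 1 when the tarball is absent (ssh_runner.go:352).
	if err := runSSH(host, fmt.Sprintf("stat -c '%%s %%y' %s", remoteTar)); err != nil {
		// Transfer the tarball from the local image cache (ssh_runner.go:362).
		if err := exec.Command("scp", localTar, host+":"+remoteTar).Run(); err != nil {
			return err
		}
	}
	// docker.go:304 equivalent: stream the tarball into the guest's Docker daemon.
	return runSSH(host, fmt.Sprintf("sudo cat %s | docker load", remoteTar))
}

func main() {
	err := loadCachedImage("docker@10.0.2.15",
		".minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
		"/var/lib/minikube/images/pause_3.7")
	fmt.Println(err)
}
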
	I0904 13:15:49.125310    4660 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0904 13:15:49.125327    4660 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:49.125349    4660 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0904 13:15:49.125360    4660 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:49.125387    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:49.125389    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:49.125360    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0904 13:15:49.125396    4660 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0904 13:15:49.125413    4660 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:49.125429    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:49.125717    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:49.136351    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0904 13:15:49.136483    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0904 13:15:49.147102    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0904 13:15:49.147129    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0904 13:15:49.147231    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0904 13:15:49.151980    4660 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0904 13:15:49.151988    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0904 13:15:49.152002    4660 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:49.152008    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0904 13:15:49.152036    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0904 13:15:49.152041    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:49.152046    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0904 13:15:49.152272    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.178363    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0904 13:15:49.193299    4660 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0904 13:15:49.193328    4660 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.193391    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.253025    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0904 13:15:49.253863    4660 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0904 13:15:49.253871    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0904 13:15:49.373209    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0904 13:15:49.408865    4660 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0904 13:15:49.408973    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:49.458815    4660 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0904 13:15:49.458838    4660 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:49.458893    4660 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:49.464853    4660 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0904 13:15:49.464867    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0904 13:15:49.476912    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0904 13:15:49.477045    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0904 13:15:49.612179    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0904 13:15:49.612218    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0904 13:15:49.612246    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0904 13:15:49.646316    4660 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0904 13:15:49.646330    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0904 13:15:49.882145    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0904 13:15:49.882191    4660 cache_images.go:92] duration metric: took 1.273976667s to LoadCachedImages
	W0904 13:15:49.882230    4660 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0904 13:15:49.882237    4660 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0904 13:15:49.882295    4660 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-175000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 13:15:49.882376    4660 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0904 13:15:49.896435    4660 cni.go:84] Creating CNI manager for ""
	I0904 13:15:49.896447    4660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:15:49.896455    4660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 13:15:49.896465    4660 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-175000 NodeName:stopped-upgrade-175000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 13:15:49.896523    4660 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-175000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 13:15:49.896579    4660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0904 13:15:49.899668    4660 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 13:15:49.899698    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 13:15:49.902495    4660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0904 13:15:49.907509    4660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 13:15:49.912365    4660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
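
The kubeadm.go:187 config above is rendered from the option struct logged at kubeadm.go:181 and then copied to /var/tmp/minikube/kubeadm.yaml.new. A hedged, heavily trimmed sketch of rendering such a document with text/template; the initTmpl constant here is hypothetical and covers only the InitConfiguration stanza of the full document.

package main

import (
	"os"
	"text/template"
)

// A trimmed, hypothetical template; the real document also carries
// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration stanzas.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the kubeadm options line above.
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "10.0.2.15",
		"APIServerPort":    8443,
		"NodeName":         "stopped-upgrade-175000",
		"NodeIP":           "10.0.2.15",
	})
}
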
	I0904 13:15:49.917499    4660 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0904 13:15:49.918866    4660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
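
The one-liner above makes the control-plane.minikube.internal mapping idempotent: strip any existing line ending in the hostname, append the fresh mapping, and copy the result back over /etc/hosts. The same logic in Go, as a sketch; ensureHostsEntry is a hypothetical helper, and the path is parameterized so the example does not touch the real /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line for the name, then appends the
// desired "ip<TAB>name" mapping, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch file, not /etc/hosts, since this is only a sketch.
	if err := ensureHostsEntry("hosts.test", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
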
	I0904 13:15:49.922480    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:49.995994    4660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 13:15:50.001903    4660 certs.go:68] Setting up /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000 for IP: 10.0.2.15
	I0904 13:15:50.001911    4660 certs.go:194] generating shared ca certs ...
	I0904 13:15:50.001920    4660 certs.go:226] acquiring lock for ca certs: {Name:mkd62cc1bdffb2500ac7e662aba46cadabbc6839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.002111    4660 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key
	I0904 13:15:50.002163    4660 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key
	I0904 13:15:50.002171    4660 certs.go:256] generating profile certs ...
	I0904 13:15:50.002255    4660 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.key
	I0904 13:15:50.002273    4660 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3
	I0904 13:15:50.002286    4660 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0904 13:15:50.179626    4660 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3 ...
	I0904 13:15:50.179643    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3: {Name:mkd4e9ea02d9b84638975702181e1980ddc91b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.180159    4660 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3 ...
	I0904 13:15:50.180168    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3: {Name:mkde62187c9daa95da8033e99db314a77b79f42b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.180325    4660 certs.go:381] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt
	I0904 13:15:50.180493    4660 certs.go:385] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key
	I0904 13:15:50.180657    4660 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/proxy-client.key
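
certs.go:363 mints a new apiserver serving certificate whose SANs cover the cluster service IP, localhost, and the guest address, matching the IP list logged at crypto.go:68. A minimal sketch of that kind of cert generation with crypto/x509, assuming a self-signed certificate for brevity; minikube actually signs with its minikubeCA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the cluster config
		// The SAN IPs from the crypto.go:68 line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; the real code signs with the shared CA.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
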
	I0904 13:15:50.180802    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661.pem (1338 bytes)
	W0904 13:15:50.180831    4660 certs.go:480] ignoring /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661_empty.pem, impossibly tiny 0 bytes
	I0904 13:15:50.180836    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 13:15:50.180865    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem (1078 bytes)
	I0904 13:15:50.180887    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem (1123 bytes)
	I0904 13:15:50.180910    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem (1675 bytes)
	I0904 13:15:50.180955    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem (1708 bytes)
	I0904 13:15:50.181320    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 13:15:50.188293    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 13:15:50.195474    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 13:15:50.202424    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 13:15:50.209852    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0904 13:15:50.216989    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 13:15:50.223681    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 13:15:50.230508    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 13:15:50.237840    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem --> /usr/share/ca-certificates/16612.pem (1708 bytes)
	I0904 13:15:50.244619    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 13:15:50.250919    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661.pem --> /usr/share/ca-certificates/1661.pem (1338 bytes)
	I0904 13:15:50.257990    4660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 13:15:50.263033    4660 ssh_runner.go:195] Run: openssl version
	I0904 13:15:50.265004    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 13:15:50.268073    4660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:15:50.269573    4660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:15:50.269603    4660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:15:50.271686    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 13:15:50.275282    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1661.pem && ln -fs /usr/share/ca-certificates/1661.pem /etc/ssl/certs/1661.pem"
	I0904 13:15:50.278370    4660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1661.pem
	I0904 13:15:50.279842    4660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 19:41 /usr/share/ca-certificates/1661.pem
	I0904 13:15:50.279868    4660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1661.pem
	I0904 13:15:50.281568    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1661.pem /etc/ssl/certs/51391683.0"
	I0904 13:15:50.284387    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16612.pem && ln -fs /usr/share/ca-certificates/16612.pem /etc/ssl/certs/16612.pem"
	I0904 13:15:50.287401    4660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16612.pem
	I0904 13:15:50.289006    4660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 19:41 /usr/share/ca-certificates/16612.pem
	I0904 13:15:50.289033    4660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16612.pem
	I0904 13:15:50.290714    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16612.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 13:15:50.293720    4660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 13:15:50.295052    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 13:15:50.296924    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 13:15:50.298689    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 13:15:50.300471    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 13:15:50.302203    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 13:15:50.303962    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
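
Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. A hedged in-process equivalent in Go; expiresWithin is a hypothetical helper, not minikube code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend <seconds>` answers via its exit code.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
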
	I0904 13:15:50.305690    4660 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:15:50.305757    4660 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 13:15:50.316041    4660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 13:15:50.319149    4660 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 13:15:50.319155    4660 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0904 13:15:50.319174    4660 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 13:15:50.322598    4660 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 13:15:50.322897    4660 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-175000" does not appear in /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:15:50.323005    4660 kubeconfig.go:62] /Users/jenkins/minikube-integration/19575-1140/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-175000" cluster setting kubeconfig missing "stopped-upgrade-175000" context setting]
	I0904 13:15:50.323202    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.323647    4660 kapi.go:59] client config for stopped-upgrade-175000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.key", CAFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10217ff80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 13:15:50.323984    4660 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 13:15:50.326766    4660 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-175000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
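
kubeadm.go:640 treats any non-empty `diff -u` between the deployed kubeadm.yaml and the freshly rendered .new file as drift and reconfigures the cluster from the new file. A sketch of that check, relying on diff's exit-code contract (0 = identical, 1 = differ, >1 = error):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output() // out holds the unified diff even on exit status 1
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	fmt.Println("kubeadm config unchanged")
}
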
	I0904 13:15:50.326772    4660 kubeadm.go:1160] stopping kube-system containers ...
	I0904 13:15:50.326815    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 13:15:50.337594    4660 docker.go:483] Stopping containers: [6a33a036cd8e cf12e052d1ba b2ede15d553f 05c225f19632 bd580d1877e3 58f0be9a136f d7e09e7da4e6 89d367665f9b]
	I0904 13:15:50.337665    4660 ssh_runner.go:195] Run: docker stop 6a33a036cd8e cf12e052d1ba b2ede15d553f 05c225f19632 bd580d1877e3 58f0be9a136f d7e09e7da4e6 89d367665f9b
	I0904 13:15:50.348401    4660 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0904 13:15:50.354022    4660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 13:15:50.357199    4660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 13:15:50.357203    4660 kubeadm.go:157] found existing configuration files:
	
	I0904 13:15:50.357223    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf
	I0904 13:15:50.359715    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 13:15:50.359738    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 13:15:50.362515    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf
	I0904 13:15:50.365518    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 13:15:50.365538    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 13:15:50.368052    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf
	I0904 13:15:50.370763    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 13:15:50.370792    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 13:15:50.373913    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf
	I0904 13:15:50.376942    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 13:15:50.376965    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 13:15:50.379427    4660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 13:15:50.382566    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:50.405297    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:50.980414    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:51.106678    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:51.128256    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:51.152632    4660 api_server.go:52] waiting for apiserver process to appear ...
	I0904 13:15:51.152703    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:51.007630    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:51.654779    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:52.154783    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:52.654764    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:52.659023    4660 api_server.go:72] duration metric: took 1.506412542s to wait for apiserver process to appear ...
	I0904 13:15:52.659046    4660 api_server.go:88] waiting for apiserver healthz status ...
	I0904 13:15:52.659079    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
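
The healthz probes above poll https://10.0.2.15:8443/healthz and report `stopped` whenever the per-attempt client timeout fires before headers arrive. A minimal Go sketch of such a probe loop; the timeouts are illustrative, and certificate verification is skipped here only because this is a bare probe (the real client pins the cluster CA, as the kapi.go:59 config shows).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-attempt deadline, the source of the Client.Timeout errors above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only; production code verifies the CA
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
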
	I0904 13:15:56.009878    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:56.010205    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:15:56.044789    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:15:56.044937    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:15:56.065592    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:15:56.065686    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:15:56.080110    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:15:56.080189    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:15:56.091876    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:15:56.091945    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:15:56.102564    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:15:56.102628    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:15:56.116209    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:15:56.116284    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:15:56.127783    4490 logs.go:276] 0 containers: []
	W0904 13:15:56.127796    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:15:56.127858    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:15:56.137927    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:15:56.137945    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:15:56.137950    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:15:56.149794    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:15:56.149804    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:15:56.162906    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:15:56.162920    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:15:56.176358    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:15:56.176371    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:15:56.189542    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:15:56.189554    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:15:56.202460    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:15:56.202475    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:15:56.214840    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:15:56.214852    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:15:56.232537    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:15:56.232549    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:15:56.247026    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:15:56.247036    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:15:56.260136    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:15:56.260149    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:15:56.264280    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:15:56.264290    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:15:56.304101    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:15:56.304114    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:15:56.318265    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:15:56.318277    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:15:56.329692    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:15:56.329707    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:15:56.344636    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:15:56.344648    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:15:56.356309    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:15:56.356322    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:15:56.379937    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:15:56.379966    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
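
Each diagnostics pass enumerates containers per control-plane component with `docker ps -a --filter=name=k8s_<component>` and then tails the last 400 lines of each, which is the logs.go:276/logs.go:123 pattern repeated above. A hedged sketch of that gather loop; containersFor is a hypothetical helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor lists all container IDs (running or not) whose name matches
// the kubelet's k8s_<component>_... naming scheme.
func containersFor(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		ids := containersFor(c)
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Tail the last 400 lines of each container, as in the log.
			exec.Command("docker", "logs", "--tail", "400", id).Run()
		}
	}
}
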
	I0904 13:15:58.921908    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:57.661137    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:57.661197    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:03.924124    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:03.924342    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:16:03.950487    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:16:03.950614    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:16:03.968167    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:16:03.968261    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:16:03.981063    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:16:03.981138    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:16:03.992839    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:16:03.992912    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:16:04.003728    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:16:04.003801    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:16:04.014347    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:16:04.014425    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:16:04.027313    4490 logs.go:276] 0 containers: []
	W0904 13:16:04.027324    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:16:04.027374    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:16:04.039704    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:16:04.039721    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:16:04.039726    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:16:04.053639    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:16:04.053648    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:16:04.074572    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:16:04.074585    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:16:04.085871    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:16:04.085881    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:16:04.097221    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:16:04.097237    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:16:04.137923    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:16:04.137932    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:16:04.142513    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:16:04.142522    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:16:04.154506    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:16:04.154519    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:16:04.166684    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:16:04.166697    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:16:04.189102    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:16:04.189112    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:16:04.199893    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:16:04.199904    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:16:04.211239    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:16:04.211250    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:16:04.222531    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:16:04.222541    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:16:04.257307    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:16:04.257319    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:16:04.269192    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:16:04.269207    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:16:04.284560    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:16:04.284572    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:16:04.307411    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:16:04.307420    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:16:02.661410    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:02.661438    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:06.825469    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:07.661722    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:07.661785    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:11.827662    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:11.827806    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:16:11.840682    4490 logs.go:276] 2 containers: [697e7d2f0666 9446d7ab7b80]
	I0904 13:16:11.840746    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:16:11.851647    4490 logs.go:276] 2 containers: [e62abfdf8147 7b4624ed8253]
	I0904 13:16:11.851718    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:16:11.862537    4490 logs.go:276] 1 containers: [d5c715d390e8]
	I0904 13:16:11.862613    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:16:11.874006    4490 logs.go:276] 2 containers: [8843564dc2e0 d9db85719842]
	I0904 13:16:11.874084    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:16:11.884616    4490 logs.go:276] 1 containers: [a6ed3d241640]
	I0904 13:16:11.884696    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:16:11.895033    4490 logs.go:276] 2 containers: [68013a2f1b65 a5623f257c84]
	I0904 13:16:11.895096    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:16:11.905209    4490 logs.go:276] 0 containers: []
	W0904 13:16:11.905226    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:16:11.905282    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:16:11.915628    4490 logs.go:276] 2 containers: [de4d79e2e8d0 6dcbba0a7395]
	I0904 13:16:11.915648    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:16:11.915653    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:16:11.957172    4490 logs.go:123] Gathering logs for kube-controller-manager [68013a2f1b65] ...
	I0904 13:16:11.957181    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68013a2f1b65"
	I0904 13:16:11.977958    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:16:11.977968    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:16:11.982589    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:16:11.982595    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:16:12.021840    4490 logs.go:123] Gathering logs for kube-scheduler [8843564dc2e0] ...
	I0904 13:16:12.021853    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8843564dc2e0"
	I0904 13:16:12.033339    4490 logs.go:123] Gathering logs for kube-apiserver [697e7d2f0666] ...
	I0904 13:16:12.033349    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697e7d2f0666"
	I0904 13:16:12.050159    4490 logs.go:123] Gathering logs for kube-proxy [a6ed3d241640] ...
	I0904 13:16:12.050170    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6ed3d241640"
	I0904 13:16:12.062416    4490 logs.go:123] Gathering logs for storage-provisioner [de4d79e2e8d0] ...
	I0904 13:16:12.062427    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4d79e2e8d0"
	I0904 13:16:12.080281    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:16:12.080290    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:16:12.102647    4490 logs.go:123] Gathering logs for storage-provisioner [6dcbba0a7395] ...
	I0904 13:16:12.102657    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcbba0a7395"
	I0904 13:16:12.113954    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:16:12.113965    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:16:12.125840    4490 logs.go:123] Gathering logs for kube-apiserver [9446d7ab7b80] ...
	I0904 13:16:12.125851    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9446d7ab7b80"
	I0904 13:16:12.137119    4490 logs.go:123] Gathering logs for etcd [e62abfdf8147] ...
	I0904 13:16:12.137134    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e62abfdf8147"
	I0904 13:16:12.151600    4490 logs.go:123] Gathering logs for etcd [7b4624ed8253] ...
	I0904 13:16:12.151613    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4624ed8253"
	I0904 13:16:12.164558    4490 logs.go:123] Gathering logs for coredns [d5c715d390e8] ...
	I0904 13:16:12.164569    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c715d390e8"
	I0904 13:16:12.176432    4490 logs.go:123] Gathering logs for kube-scheduler [d9db85719842] ...
	I0904 13:16:12.176447    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9db85719842"
	I0904 13:16:12.187612    4490 logs.go:123] Gathering logs for kube-controller-manager [a5623f257c84] ...
	I0904 13:16:12.187626    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5623f257c84"
	I0904 13:16:14.699341    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:12.662278    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:12.662300    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:19.701583    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:19.701666    4490 kubeadm.go:597] duration metric: took 4m4.762362208s to restartPrimaryControlPlane
	W0904 13:16:19.701752    4490 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0904 13:16:19.701785    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0904 13:16:20.677189    4490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 13:16:20.682367    4490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 13:16:20.685268    4490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 13:16:20.688357    4490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 13:16:20.688363    4490 kubeadm.go:157] found existing configuration files:
	
	I0904 13:16:20.688386    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/admin.conf
	I0904 13:16:20.690919    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 13:16:20.690946    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 13:16:20.693674    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/kubelet.conf
	I0904 13:16:20.696953    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 13:16:20.696973    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 13:16:20.699915    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/controller-manager.conf
	I0904 13:16:20.702285    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 13:16:20.702307    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 13:16:20.705293    4490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/scheduler.conf
	I0904 13:16:20.708304    4490 kubeadm.go:163] "https://control-plane.minikube.internal:50315" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50315 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 13:16:20.708328    4490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
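
The grep/rm sequence above enforces a simple invariant before kubeadm init: each kubeconfig under /etc/kubernetes must already point at the expected control-plane endpoint, and anything that does not is removed so init can regenerate it. A minimal Go sketch of that logic, as an assumption (minikube actually runs grep and rm -f over SSH, as shown above):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50315"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// missing file or wrong endpoint: either way the config is stale
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // mirrors: sudo rm -f <file>
		}
	}
}
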
	I0904 13:16:20.711063    4490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 13:16:20.728571    4490 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0904 13:16:20.728771    4490 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 13:16:20.799659    4490 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 13:16:20.799725    4490 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 13:16:20.799791    4490 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 13:16:20.849622    4490 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 13:16:20.853632    4490 out.go:235]   - Generating certificates and keys ...
	I0904 13:16:20.853665    4490 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 13:16:20.853698    4490 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 13:16:20.853735    4490 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0904 13:16:20.853763    4490 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0904 13:16:20.853801    4490 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0904 13:16:20.853838    4490 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0904 13:16:20.853877    4490 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0904 13:16:20.853909    4490 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0904 13:16:20.853949    4490 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0904 13:16:20.853988    4490 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0904 13:16:20.854007    4490 kubeadm.go:310] [certs] Using the existing "sa" key
	I0904 13:16:20.854032    4490 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 13:16:20.918226    4490 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 13:16:20.975827    4490 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 13:16:21.106365    4490 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 13:16:21.162358    4490 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 13:16:21.198194    4490 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 13:16:21.198381    4490 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 13:16:21.198475    4490 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 13:16:21.289942    4490 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 13:16:17.662797    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:17.662829    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:21.291628    4490 out.go:235]   - Booting up control plane ...
	I0904 13:16:21.291672    4490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 13:16:21.291712    4490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 13:16:21.291750    4490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 13:16:21.291822    4490 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 13:16:21.291922    4490 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0904 13:16:22.663532    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:22.663558    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:25.794742    4490 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503557 seconds
	I0904 13:16:25.794834    4490 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 13:16:25.799555    4490 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 13:16:26.314859    4490 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 13:16:26.315260    4490 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-478000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 13:16:26.819887    4490 kubeadm.go:310] [bootstrap-token] Using token: 7qgyum.uup81ppvceosqebq
	I0904 13:16:26.826405    4490 out.go:235]   - Configuring RBAC rules ...
	I0904 13:16:26.826467    4490 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 13:16:26.826520    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 13:16:26.828551    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 13:16:26.830441    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0904 13:16:26.831451    4490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 13:16:26.832285    4490 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 13:16:26.835701    4490 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 13:16:27.011341    4490 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 13:16:27.224213    4490 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 13:16:27.224649    4490 kubeadm.go:310] 
	I0904 13:16:27.224685    4490 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 13:16:27.224692    4490 kubeadm.go:310] 
	I0904 13:16:27.224740    4490 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 13:16:27.224745    4490 kubeadm.go:310] 
	I0904 13:16:27.224757    4490 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 13:16:27.224793    4490 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 13:16:27.224822    4490 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 13:16:27.224827    4490 kubeadm.go:310] 
	I0904 13:16:27.224856    4490 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 13:16:27.224861    4490 kubeadm.go:310] 
	I0904 13:16:27.224885    4490 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 13:16:27.224888    4490 kubeadm.go:310] 
	I0904 13:16:27.224924    4490 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 13:16:27.224971    4490 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 13:16:27.225007    4490 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 13:16:27.225010    4490 kubeadm.go:310] 
	I0904 13:16:27.225054    4490 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 13:16:27.225098    4490 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 13:16:27.225103    4490 kubeadm.go:310] 
	I0904 13:16:27.225148    4490 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7qgyum.uup81ppvceosqebq \
	I0904 13:16:27.225205    4490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 \
	I0904 13:16:27.225217    4490 kubeadm.go:310] 	--control-plane 
	I0904 13:16:27.225221    4490 kubeadm.go:310] 
	I0904 13:16:27.225265    4490 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 13:16:27.225272    4490 kubeadm.go:310] 
	I0904 13:16:27.225310    4490 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7qgyum.uup81ppvceosqebq \
	I0904 13:16:27.225365    4490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 
	I0904 13:16:27.225427    4490 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 13:16:27.225433    4490 cni.go:84] Creating CNI manager for ""
	I0904 13:16:27.225440    4490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:16:27.229937    4490 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 13:16:27.236886    4490 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 13:16:27.240151    4490 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
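
The scp above writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist; the exact payload is not shown in the log. The sketch below writes a representative bridge-plugin conflist of that kind; the JSON fields and the 10.244.0.0/16 subnet are assumptions, not the file's verbatim contents.

package main

import "os"

// representative bridge CNI config; field values are assumed, not taken from the log
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// equivalent of: sudo mkdir -p /etc/cni/net.d, then scp memory --> 1-k8s.conflist
	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
}
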
	I0904 13:16:27.245244    4490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 13:16:27.245295    4490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 13:16:27.245325    4490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-478000 minikube.k8s.io/updated_at=2024_09_04T13_16_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=running-upgrade-478000 minikube.k8s.io/primary=true
	I0904 13:16:27.248390    4490 ops.go:34] apiserver oom_adj: -16
	I0904 13:16:27.291760    4490 kubeadm.go:1113] duration metric: took 46.500625ms to wait for elevateKubeSystemPrivileges
	I0904 13:16:27.291778    4490 kubeadm.go:394] duration metric: took 4m12.432398417s to StartCluster
	I0904 13:16:27.291788    4490 settings.go:142] acquiring lock: {Name:mk9e5d70c30d2e6b96e7a9eeb7ab14f5f9a1127e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:16:27.291885    4490 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:16:27.292293    4490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:16:27.292490    4490 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:16:27.292502    4490 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 13:16:27.292576    4490 config.go:182] Loaded profile config "running-upgrade-478000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:16:27.292590    4490 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-478000"
	I0904 13:16:27.292590    4490 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-478000"
	I0904 13:16:27.292605    4490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-478000"
	I0904 13:16:27.292615    4490 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-478000"
	W0904 13:16:27.292619    4490 addons.go:243] addon storage-provisioner should already be in state true
	I0904 13:16:27.292630    4490 host.go:66] Checking if "running-upgrade-478000" exists ...
	I0904 13:16:27.300853    4490 out.go:177] * Verifying Kubernetes components...
	I0904 13:16:27.301055    4490 kapi.go:59] client config for running-upgrade-478000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.key", CAFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104957f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
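
The rest.Config dump above reduces to a handful of fields. A hedged client-go sketch that builds an equivalent client from the profile's certificates (the zero-valued fields and the WrapTransport hook are omitted for brevity):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			// client cert/key and CA taken from the profile, as in the dump above
			CertFile: "/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/running-upgrade-478000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("client config error:", err)
		return
	}
	_ = clientset // ready for readiness checks against the apiserver
}
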
	I0904 13:16:27.301404    4490 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-478000"
	W0904 13:16:27.301411    4490 addons.go:243] addon default-storageclass should already be in state true
	I0904 13:16:27.301421    4490 host.go:66] Checking if "running-upgrade-478000" exists ...
	I0904 13:16:27.302230    4490 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 13:16:27.304743    4490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 13:16:27.304751    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	I0904 13:16:27.304931    4490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:16:27.310941    4490 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:16:27.312378    4490 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:16:27.312384    4490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 13:16:27.312394    4490 sshutil.go:53] new ssh client: &{IP:localhost Port:50283 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/running-upgrade-478000/id_rsa Username:docker}
	I0904 13:16:27.398875    4490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 13:16:27.403833    4490 api_server.go:52] waiting for apiserver process to appear ...
	I0904 13:16:27.403873    4490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:16:27.407856    4490 api_server.go:72] duration metric: took 115.35675ms to wait for apiserver process to appear ...
	I0904 13:16:27.407864    4490 api_server.go:88] waiting for apiserver healthz status ...
	I0904 13:16:27.407871    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:27.436722    4490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 13:16:27.439583    4490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:16:27.764533    4490 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 13:16:27.764545    4490 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 13:16:27.664433    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:27.664458    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:32.410073    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:32.410168    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:32.665668    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:32.665740    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:37.411393    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:37.411436    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:37.667473    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:37.667512    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:42.412097    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:42.412141    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:42.668700    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:42.668719    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:47.413440    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:47.413493    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:47.670897    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:47.670968    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:52.414358    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:52.414455    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:52.673372    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:52.673623    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:16:52.692689    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:16:52.692816    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:16:52.713995    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:16:52.714088    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:16:52.725283    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:16:52.725378    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:16:52.735998    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:16:52.736080    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:16:52.746446    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:16:52.746514    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:16:52.756767    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:16:52.756846    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:16:52.768169    4660 logs.go:276] 0 containers: []
	W0904 13:16:52.768180    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:16:52.768250    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:16:52.778065    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:16:52.778092    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:16:52.778097    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:16:52.794252    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:16:52.794263    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:16:52.806015    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:16:52.806025    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:16:52.831974    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:16:52.831987    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:16:52.836275    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:16:52.836282    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:16:52.911575    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:16:52.911587    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:16:52.952136    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:16:52.952145    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:16:52.970513    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:16:52.970523    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:16:52.985957    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:16:52.985973    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:16:52.998083    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:16:52.998094    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:16:53.011348    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:16:53.011359    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:16:53.028659    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:16:53.028669    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:16:53.040419    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:16:53.040429    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:16:53.051262    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:16:53.051275    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:16:53.089159    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:16:53.089185    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:16:53.103875    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:16:53.103885    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:16:53.121560    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:16:53.121574    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
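
Each gathering cycle above has the same shape: docker ps -a --filter=name=k8s_<component> to find container IDs, then docker logs --tail 400 per ID. A minimal Go sketch of that loop, assuming a local Docker daemon rather than minikube's SSH runner (this is not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		// list all (including exited) containers for this component
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}
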
	I0904 13:16:55.635998    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:57.415989    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:57.416032    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0904 13:16:57.766394    4490 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0904 13:16:57.770935    4490 out.go:177] * Enabled addons: storage-provisioner
	I0904 13:16:57.779916    4490 addons.go:510] duration metric: took 30.487932375s for enable addons: enabled=[storage-provisioner]
	I0904 13:17:00.638744    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:00.639435    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:00.676510    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:00.676646    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:00.695534    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:00.695641    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:00.709317    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:00.709399    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:00.721556    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:00.721628    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:00.732215    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:00.732285    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:00.743303    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:00.743374    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:00.759036    4660 logs.go:276] 0 containers: []
	W0904 13:17:00.759046    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:00.759101    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:00.773455    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:00.773474    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:00.773480    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:00.784646    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:00.784656    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:00.811120    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:00.811130    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:00.850418    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:00.850427    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:00.865773    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:00.865783    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:00.881325    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:00.881336    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:00.899027    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:00.899039    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:00.911656    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:00.911666    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:00.922805    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:00.922817    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:00.940402    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:00.940413    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:00.979655    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:00.979668    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:00.991161    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:00.991175    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:01.002828    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:01.002837    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:01.006881    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:01.006888    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:01.043505    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:01.043517    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:01.055419    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:01.055430    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:01.070456    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:01.070467    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:02.418094    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:02.418166    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:03.585234    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:07.420553    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:07.420598    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:08.587554    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:08.587758    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:08.613237    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:08.613334    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:08.632371    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:08.632449    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:08.644856    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:08.644920    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:08.660465    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:08.660535    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:08.670757    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:08.670815    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:08.681559    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:08.681618    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:08.691648    4660 logs.go:276] 0 containers: []
	W0904 13:17:08.691660    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:08.691716    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:08.704026    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:08.704044    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:08.704050    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:08.716252    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:08.716263    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:08.727457    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:08.727468    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:08.739067    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:08.739079    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:08.754855    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:08.754868    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:08.766359    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:08.766370    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:08.784513    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:08.784525    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:08.808534    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:08.808542    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:08.844903    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:08.844915    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:08.849129    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:08.849136    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:08.861136    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:08.861156    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:08.874986    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:08.875002    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:08.914676    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:08.914690    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:08.926512    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:08.926523    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:08.944199    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:08.944210    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:08.955835    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:08.955849    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:08.991335    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:08.991347    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:12.422259    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:12.422337    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:11.507461    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:17.424758    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:17.424804    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:16.508330    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:16.508479    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:16.525115    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:16.525204    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:16.538153    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:16.538224    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:16.549732    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:16.549807    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:16.560288    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:16.560361    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:16.570931    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:16.571001    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:16.581493    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:16.581556    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:16.592528    4660 logs.go:276] 0 containers: []
	W0904 13:17:16.592540    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:16.592599    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:16.602911    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:16.602929    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:16.602934    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:16.620279    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:16.620289    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:16.635826    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:16.635839    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:16.649854    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:16.649864    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:16.663467    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:16.663478    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:16.675379    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:16.675389    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:16.687266    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:16.687276    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:16.699025    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:16.699037    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:16.740394    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:16.740411    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:16.752335    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:16.752348    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:16.764178    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:16.764190    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:16.800881    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:16.800892    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:16.805122    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:16.805129    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:16.841794    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:16.841804    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:16.857283    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:16.857294    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:16.875003    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:16.875016    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:16.887450    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:16.887460    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:19.413294    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:22.427131    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:22.427209    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:24.415862    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:24.416039    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:24.431838    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:24.431909    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:24.442520    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:24.442591    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:24.460909    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:24.460984    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:24.472171    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:24.472246    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:24.482581    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:24.482645    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:24.493711    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:24.493792    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:24.504580    4660 logs.go:276] 0 containers: []
	W0904 13:17:24.504592    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:24.504658    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:24.519873    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:24.519891    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:24.519897    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:24.557189    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:24.557198    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:24.571866    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:24.571878    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:24.584199    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:24.584209    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:24.622328    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:24.622341    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:24.634618    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:24.634633    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:24.647233    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:24.647244    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:24.671689    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:24.671699    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:24.689677    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:24.689688    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:24.701779    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:24.701791    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:24.713121    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:24.713135    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:24.717897    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:24.717906    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:24.759922    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:24.759936    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:24.774453    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:24.774467    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:24.789738    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:24.789751    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:24.805582    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:24.805591    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:24.820390    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:24.820400    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:27.429708    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:27.429870    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:27.453012    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:27.453114    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:27.469840    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:27.469919    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:27.483061    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:27.483131    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:27.494432    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:27.494507    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:27.513461    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:27.513539    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:27.530230    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:27.530304    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:27.542099    4490 logs.go:276] 0 containers: []
	W0904 13:17:27.542110    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:27.542172    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:27.553008    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:27.553021    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:27.553027    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:27.571412    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:27.571424    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:27.587670    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:27.587680    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:27.611200    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:27.611210    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:27.623082    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:27.623097    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:27.638399    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:27.638412    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:27.674224    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:27.674235    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:27.688976    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:27.688988    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:27.703176    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:27.703189    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:27.714946    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:27.714957    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:27.726843    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:27.726854    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:27.740042    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:27.740053    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:27.779856    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:27.779869    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:30.286781    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:27.334117    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:35.289043    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
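
Two minikube processes (pids 4490 and 4660, two parallel tests) are polling the same guest address here, which is why timestamps occasionally step backwards between adjacent lines. Each "Checking apiserver healthz" line is an HTTPS GET against https://10.0.2.15:8443/healthz, and the matching "stopped" line about five seconds later is the client timeout firing before the apiserver ever answers. A minimal sketch of that probe, with the 5 s timeout inferred from the timestamp gaps and certificate verification skipped as a simplifying assumption:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped"
            Transport: &http.Transport{
                // Assumption for the sketch: skip verifying the cluster's self-signed cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %w", err) // surfaces as "Client.Timeout exceeded ..."
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }
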
	I0904 13:17:35.289202    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:35.307716    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:35.307803    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:35.327031    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:35.327095    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:35.343439    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:35.343502    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:35.355158    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:35.355232    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:35.366352    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:35.366420    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:35.376905    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:35.376968    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:35.391228    4490 logs.go:276] 0 containers: []
	W0904 13:17:35.391239    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:35.391298    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:35.401853    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:35.401870    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:35.401875    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:35.416680    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:35.416693    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:35.431080    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:35.431094    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:35.452250    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:35.452262    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:35.465057    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:35.465070    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:35.490987    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:35.490999    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:35.502425    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:35.502436    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:35.541625    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:35.541636    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:35.545890    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:35.545899    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:35.582281    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:35.582295    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:35.594081    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:35.594092    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:35.606476    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:35.606488    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:35.624703    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:35.624714    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:32.336437    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:32.336747    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:32.371141    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:32.371266    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:32.393287    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:32.393404    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:32.408500    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:32.408573    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:32.421131    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:32.421207    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:32.432385    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:32.432457    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:32.443280    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:32.443342    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:32.453679    4660 logs.go:276] 0 containers: []
	W0904 13:17:32.453688    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:32.453738    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:32.464700    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:32.464724    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:32.464730    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:32.476986    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:32.476998    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:32.501630    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:32.501644    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:32.514510    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:32.514521    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:32.529835    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:32.529845    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:32.577540    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:32.577550    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:32.592708    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:32.592723    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:32.603636    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:32.603649    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
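
The "container status" command above is a small shell fallback chain: which crictl || echo crictl substitutes crictl's full path when it is installed (and the bare name, which then fails to execute, when it is not), and the outer || falls back to sudo docker ps -a whenever the crictl invocation fails. A sketch of the same chain driven from Go (containerStatus is an illustrative helper name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        // Prefer crictl when present; otherwise the first command fails and
        // the || falls through to plain docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }
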
	I0904 13:17:32.615203    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:32.615217    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:32.654063    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:32.654073    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:32.658817    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:32.658824    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:32.693218    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:32.693229    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:32.707643    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:32.707656    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:32.722154    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:32.722163    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:32.741873    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:32.741885    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:32.752888    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:32.752902    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:32.764694    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:32.764707    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:35.289033    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:38.138254    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:40.291370    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:40.291523    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:40.311584    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:40.311685    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:40.326381    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:40.326460    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:40.338416    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:40.338490    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:40.349385    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:40.349455    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:40.359548    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:40.359618    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:40.374420    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:40.374490    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:40.384871    4660 logs.go:276] 0 containers: []
	W0904 13:17:40.384882    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:40.384941    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:40.395681    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:40.395698    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:40.395704    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:40.407741    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:40.407755    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
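
The "describe nodes" step does not rely on a kubectl on the PATH: minikube invokes the binary it provisioned for the cluster's Kubernetes version (/var/lib/minikube/binaries/v1.24.1/kubectl here) against the in-guest kubeconfig. Sketched in Go with the version string parameterized (describeNodes is our name for the helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func describeNodes(version string) (string, error) {
        bin := "/var/lib/minikube/binaries/" + version + "/kubectl"
        out, err := exec.Command("sudo", bin, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describeNodes("v1.24.1")
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }
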
	I0904 13:17:40.443109    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:40.443120    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:40.466705    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:40.466713    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:40.484447    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:40.484459    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:40.496058    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:40.496069    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:40.507656    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:40.507666    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:40.518755    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:40.518766    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:40.531752    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:40.531762    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:40.535897    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:40.535903    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:40.554565    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:40.554576    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:40.593007    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:40.593021    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:40.607191    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:40.607209    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:40.618724    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:40.618737    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:40.636296    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:40.636311    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:40.647740    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:40.647752    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:40.685626    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:40.685636    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:43.140281    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:43.140547    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:43.161870    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:43.161985    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:43.177648    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:43.177720    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:43.190406    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:43.190473    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:43.201611    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:43.201677    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:43.212174    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:43.212247    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:43.222861    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:43.222919    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:43.233401    4490 logs.go:276] 0 containers: []
	W0904 13:17:43.233412    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:43.233468    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:43.243901    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:43.243917    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:43.243923    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:43.255736    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:43.255746    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:43.266808    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:43.266818    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:43.278236    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:43.278246    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:43.294081    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:43.294091    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:43.307819    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:43.307829    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:43.322522    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:43.322532    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:43.334175    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:43.334185    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:43.353670    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:43.353680    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:43.390847    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:43.390857    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:43.394927    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:43.394936    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:43.428291    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:43.428302    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:43.451695    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:43.451705    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:43.202578    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:45.965703    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:48.203929    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:48.204171    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:48.223989    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:48.224092    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:48.237843    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:48.237921    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:48.252112    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:48.252180    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:48.262248    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:48.262320    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:48.273088    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:48.273155    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:48.284260    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:48.284332    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:48.293818    4660 logs.go:276] 0 containers: []
	W0904 13:17:48.293828    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:48.293887    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:48.305034    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:48.305052    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:48.305059    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:48.342325    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:48.342342    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:48.353499    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:48.353513    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:48.365070    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:48.365080    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:48.381184    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:48.381194    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:48.400084    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:48.400096    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:48.411609    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:48.411624    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:48.415715    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:48.415720    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:48.478359    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:48.478374    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:48.493397    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:48.493408    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:48.507566    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:48.507579    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:48.519360    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:48.519374    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:48.557966    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:48.557982    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:48.570322    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:48.570334    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:48.584795    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:48.584808    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:48.597136    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:48.597146    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:48.608588    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:48.608598    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
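
The Docker gathering step reads two units in one merged, time-ordered journal stream: journalctl -u docker -u cri-docker -n 400 covers both the engine and the cri-dockerd shim, which is where CRI-level failures (image pulls, sandbox creation) land. A sketch (unitLogs is an illustrative helper name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitLogs mirrors: sudo journalctl -u <unit> [-u <unit>...] -n 400
    func unitLogs(units ...string) (string, error) {
        args := []string{"journalctl", "-n", "400"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := unitLogs("docker", "cri-docker")
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }
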
	I0904 13:17:51.136420    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:50.968272    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:50.968617    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:51.007064    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:51.007201    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:51.033636    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:51.033738    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:51.048933    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:51.049007    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:51.060780    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:51.060854    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:51.072415    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:51.072478    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:51.083094    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:51.083166    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:51.093701    4490 logs.go:276] 0 containers: []
	W0904 13:17:51.093713    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:51.093776    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:51.104759    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:51.104779    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:51.104785    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:51.115875    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:51.115889    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:51.130579    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:51.130590    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:51.152295    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:51.152306    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:51.177744    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:51.177753    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:51.182519    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:51.182528    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:51.197117    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:51.197130    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:51.212214    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:51.212223    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:51.224221    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:51.224232    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:51.236246    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:51.236256    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:51.248292    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:51.248302    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:51.259886    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:51.259897    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:51.299524    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:51.299536    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:53.841808    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:56.138522    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:56.138651    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:56.150428    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:56.150535    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:56.161565    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:56.161633    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:56.172002    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:56.172073    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:56.186789    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:56.186864    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:56.198227    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:56.198296    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:56.209284    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:56.209352    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:56.219977    4660 logs.go:276] 0 containers: []
	W0904 13:17:56.219988    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:56.220051    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:56.230340    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:56.230360    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:56.230366    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:56.234691    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:56.234700    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:56.248313    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:56.248323    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:56.260194    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:56.260205    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:56.277533    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:56.277543    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:56.314536    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:56.314545    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:56.328634    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:56.328644    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:56.343426    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:56.343436    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:56.355240    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:56.355250    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:56.366503    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:56.366516    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:56.391769    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:56.391784    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:56.426776    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:56.426790    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:58.844112    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:58.844490    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:58.883725    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:17:58.883854    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:58.902739    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:17:58.902824    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:58.918032    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:17:58.918112    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:58.929758    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:17:58.929829    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:58.940119    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:17:58.940185    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:58.950891    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:17:58.950958    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:58.960982    4490 logs.go:276] 0 containers: []
	W0904 13:17:58.960994    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:58.961057    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:58.974567    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:17:58.974586    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:17:58.974592    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:17:58.986291    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:17:58.986303    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:58.997820    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:17:58.997833    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:17:59.009289    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:17:59.009303    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:17:59.024042    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:17:59.024055    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:17:59.035622    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:17:59.035634    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:17:59.050451    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:17:59.050464    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:17:59.064517    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:17:59.064532    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:17:59.079110    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:17:59.079124    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:17:59.096329    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:59.096339    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:59.120510    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:59.120519    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:59.158219    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:59.158232    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:59.162532    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:59.162542    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:56.465856    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:56.465867    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:56.477974    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:56.477987    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:56.494231    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:56.494244    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:56.505842    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:56.505853    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:56.519769    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:56.519784    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:59.032675    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:01.700394    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:04.034125    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:04.034415    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:04.062604    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:04.062736    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:04.080501    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:04.080592    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:04.093917    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:04.094009    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:04.105765    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:04.105839    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:04.116319    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:04.116391    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:04.141219    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:04.141298    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:04.151650    4660 logs.go:276] 0 containers: []
	W0904 13:18:04.151662    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:04.151721    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:04.163717    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:04.163739    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:04.163744    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:04.200394    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:04.200405    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:04.211772    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:04.211784    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:04.227051    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:04.227064    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:04.239013    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:04.239028    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:04.250192    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:04.250202    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:04.265109    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:04.265121    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:04.269364    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:04.269373    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:04.283141    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:04.283152    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:04.296928    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:04.296942    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:04.309010    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:04.309022    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:04.334633    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:04.334640    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:04.346270    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:04.346285    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:04.382392    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:04.382403    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:04.397484    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:04.397497    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:04.416785    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:04.416796    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:04.454647    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:04.454661    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:06.702594    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:06.702814    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:06.718738    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:06.718818    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:06.733122    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:06.733193    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:06.743754    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:06.743814    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:06.753950    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:06.754010    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:06.764900    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:06.764969    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:06.775793    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:06.775853    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:06.785852    4490 logs.go:276] 0 containers: []
	W0904 13:18:06.785861    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:06.785909    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:06.796474    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:06.796491    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:06.796498    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:06.810684    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:06.810698    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:06.821883    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:06.821893    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:06.836327    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:06.836338    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:06.854820    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:06.854835    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:06.869748    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:06.869762    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:06.881572    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:06.881585    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:06.918825    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:06.918835    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:06.954206    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:06.954218    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:06.969117    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:06.969132    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:06.981129    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:06.981139    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:06.993253    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:06.993264    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:07.016506    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:07.016514    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
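
The dmesg invocation is tuned for capture rather than interactive use: reading the flags per util-linux dmesg (an assumption if the guest ships a different implementation), -H formats timestamps human-readably, -P disables the pager that -H would otherwise start, -L=never suppresses color codes, and --level warn,err,crit,alert,emerg keeps only warning-and-worse kernel messages before tail trims the result to 400 lines. Driven from Go (recentKernelWarnings is an illustrative helper name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func recentKernelWarnings() (string, error) {
        cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := recentKernelWarnings()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }
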
	I0904 13:18:09.521766    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:06.974742    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:14.524129    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:14.524371    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:14.542848    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:14.542926    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:14.554950    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:14.555024    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:14.565519    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:14.565588    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:14.576268    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:14.576341    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:14.590458    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:14.590529    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:14.600789    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:14.600856    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:14.611682    4490 logs.go:276] 0 containers: []
	W0904 13:18:14.611692    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:14.611754    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:14.622988    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:14.623002    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:14.623007    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:14.660139    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:14.660153    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:14.675399    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:14.675410    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:14.687030    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:14.687040    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:14.703817    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:14.703828    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:14.743235    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:14.743249    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:14.747721    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:14.747730    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:14.761404    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:14.761417    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:14.773293    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:14.773308    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:14.784954    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:14.784965    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:14.799947    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:14.799957    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:14.811376    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:14.811390    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:14.836885    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:14.836895    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:11.976902    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:11.977143    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:12.002710    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:12.002834    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:12.018745    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:12.018826    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:12.038957    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:12.039026    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:12.049696    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:12.049771    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:12.060939    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:12.061004    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:12.071531    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:12.071603    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:12.081664    4660 logs.go:276] 0 containers: []
	W0904 13:18:12.081675    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:12.081730    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:12.092612    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:12.092628    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:12.092633    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:12.132062    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:12.132077    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:12.146193    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:12.146206    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:12.157686    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:12.157697    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:12.183607    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:12.183625    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:12.195404    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:12.195417    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:12.207651    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:12.207667    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:12.223981    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:12.223993    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:12.228422    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:12.228429    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:12.267311    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:12.267329    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:12.281456    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:12.281484    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:12.296608    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:12.296622    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:12.313798    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:12.313810    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:12.325385    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:12.325398    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:12.337504    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:12.337515    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:12.373455    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:12.373470    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:12.388693    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:12.388705    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:14.900813    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:17.350332    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:19.903063    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:19.903307    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:19.928501    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:19.928612    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:19.945621    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:19.945701    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:19.960081    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:19.960163    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:19.971368    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:19.971433    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:19.981663    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:19.981741    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:19.992376    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:19.992447    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:20.002633    4660 logs.go:276] 0 containers: []
	W0904 13:18:20.002643    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:20.002702    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:20.013358    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:20.013376    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:20.013382    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:20.049659    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:20.049677    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:20.064248    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:20.064262    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:20.075794    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:20.075812    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:20.093212    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:20.093227    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:20.105331    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:20.105343    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:20.110064    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:20.110074    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:20.124553    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:20.124563    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:20.137110    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:20.137121    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:20.152307    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:20.152318    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:20.167614    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:20.167624    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:20.182923    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:20.182935    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:20.198549    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:20.198558    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:20.224102    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:20.224124    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:20.263724    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:20.263744    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:20.311865    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:20.311878    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:20.324299    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:20.324310    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
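
Each gather cycle begins with container discovery: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component (the ssh_runner.go:195 / logs.go:276 pairs above). A sketch of that step under the same assumptions, with the component list copied from the filters in the log; the zero-match branch mirrors the W-level 'No container was found matching "kindnet"' lines:

```go
// Sketch of the per-component container discovery step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
	}
}
```

Because "docker ps -a" includes exited containers, components reporting two IDs (kube-apiserver, etcd, and others in the 4660 blocks) typically have a current instance plus an earlier, exited one, and the cycle collects logs from both.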
	I0904 13:18:22.352674    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:22.352884    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:22.370232    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:22.370325    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:22.383890    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:22.383965    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:22.395400    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:22.395470    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:22.405536    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:22.405607    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:22.416641    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:22.416712    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:22.427206    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:22.427273    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:22.437204    4490 logs.go:276] 0 containers: []
	W0904 13:18:22.437214    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:22.437268    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:22.454467    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:22.454484    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:22.454489    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:22.492274    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:22.492285    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:22.496771    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:22.496779    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:22.508938    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:22.508954    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:22.526537    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:22.526551    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:22.538512    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:22.538527    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:22.562052    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:22.562063    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:22.597562    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:22.597574    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:22.612324    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:22.612337    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:22.626026    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:22.626040    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:22.637999    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:22.638011    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:22.650179    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:22.650192    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:22.668112    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:22.668123    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:25.181644    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:22.837874    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:30.183816    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:30.183959    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:30.195123    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:30.195197    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:30.205785    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:30.205842    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:30.216252    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:30.216328    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:30.227060    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:30.227125    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:30.237192    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:30.237264    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:30.250355    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:30.250420    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:30.260768    4490 logs.go:276] 0 containers: []
	W0904 13:18:30.260779    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:30.260832    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:30.271238    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:30.271252    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:30.271257    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:30.289836    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:30.289847    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:30.302917    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:30.302932    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:30.317488    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:30.317501    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:30.329639    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:30.329651    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:30.341310    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:30.341325    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:30.364329    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:30.364337    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:30.401393    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:30.401402    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:30.405690    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:30.405699    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:30.441007    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:30.441018    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:30.454629    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:30.454642    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:30.466173    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:30.466186    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:30.479077    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:30.479091    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:27.840101    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:27.840298    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:27.861084    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:27.861175    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:27.876319    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:27.876398    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:27.888892    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:27.888971    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:27.901903    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:27.901982    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:27.914122    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:27.914196    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:27.924864    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:27.924935    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:27.935166    4660 logs.go:276] 0 containers: []
	W0904 13:18:27.935177    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:27.935236    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:27.945754    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:27.945773    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:27.945778    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:27.957440    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:27.957451    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:27.980800    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:27.980808    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:27.993019    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:27.993029    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:27.997652    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:27.997659    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:28.015806    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:28.015817    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:28.027574    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:28.027584    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:28.039672    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:28.039683    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:28.050798    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:28.050809    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:28.064801    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:28.064812    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:28.076611    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:28.076625    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:28.115230    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:28.115240    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:28.130231    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:28.130240    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:28.168085    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:28.168096    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:28.182392    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:28.182401    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:28.194174    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:28.194184    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:28.230204    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:28.230216    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:30.745145    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:33.002107    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:35.747447    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:35.747789    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:35.779562    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:35.779685    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:35.798908    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:35.799004    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:35.813148    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:35.813227    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:35.825567    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:35.825642    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:35.836935    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:35.837001    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:35.847831    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:35.847904    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:35.858325    4660 logs.go:276] 0 containers: []
	W0904 13:18:35.858337    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:35.858397    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:35.869662    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:35.869679    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:35.869685    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:35.884176    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:35.884186    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:35.900275    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:35.900287    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:35.911709    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:35.911719    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:35.923407    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:35.923422    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:35.961277    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:35.961285    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:35.966010    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:35.966018    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:36.001331    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:36.001341    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:36.015383    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:36.015394    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:36.028485    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:36.028495    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:36.039746    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:36.039757    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:36.054604    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:36.054619    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:36.070321    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:36.070331    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:36.108473    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:36.108484    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:36.123466    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:36.123478    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:36.140817    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:36.140828    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:36.164230    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:36.164236    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
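
The remainder of each cycle shells out for the logs themselves: "docker logs --tail 400 <id>" per discovered container, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg, "kubectl describe nodes" against the in-guest kubeconfig, and a container-status listing. That last command degrades gracefully: if crictl is on PATH, the backquoted substitution expands to its full path; otherwise the bare word crictl fails and the "|| sudo docker ps -a" alternative runs instead. A condensed sketch of the collection step; the container ID is a placeholder copied from the log, not a live container:

```go
// Sketch of the log-collection step: docker logs per container plus
// host-level sources, mirroring the commands in the log above.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash, as ssh_runner does
// on the guest, and reports (but tolerates) failures.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("gathering %s: %v\n", name, err)
	}
	_ = out // a real collector would append this to the report
}

func main() {
	gather("kube-apiserver [fee74e624df2]", "docker logs --tail 400 fee74e624df2")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```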
	I0904 13:18:38.002671    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:38.002990    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:38.031515    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:38.031620    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:38.049052    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:38.049135    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:38.066312    4490 logs.go:276] 2 containers: [c2e4fd07d881 083b85426991]
	I0904 13:18:38.066385    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:38.076891    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:38.076964    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:38.087956    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:38.088034    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:38.098788    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:38.098862    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:38.109105    4490 logs.go:276] 0 containers: []
	W0904 13:18:38.109116    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:38.109177    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:38.121137    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:38.121152    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:38.121157    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:38.158197    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:38.158206    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:38.173314    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:38.173326    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:38.185325    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:38.185339    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:38.189644    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:38.189654    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:38.224049    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:38.224062    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:38.239070    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:38.239079    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:38.253254    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:38.253268    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:38.266085    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:38.266094    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:38.280242    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:38.280254    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:38.299068    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:38.299078    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:38.310523    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:38.310533    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:38.333978    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:38.333991    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:38.679571    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:40.848502    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:43.681776    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:43.682122    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:43.713207    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:43.713348    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:43.733206    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:43.733315    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:43.747647    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:43.747713    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:43.759295    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:43.759368    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:43.769755    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:43.769828    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:43.780637    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:43.780707    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:43.791339    4660 logs.go:276] 0 containers: []
	W0904 13:18:43.791350    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:43.791408    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:43.802694    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:43.802713    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:43.802720    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:43.818194    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:43.818205    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:43.833700    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:43.833709    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:43.851671    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:43.851682    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:43.874727    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:43.874740    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:43.878986    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:43.878994    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:43.892684    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:43.892693    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:43.907385    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:43.907395    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:43.921343    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:43.921354    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:43.957510    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:43.957520    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:43.969314    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:43.969324    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:43.981297    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:43.981308    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:43.992875    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:43.992887    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:44.004901    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:44.004912    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:44.041751    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:44.041761    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:44.083191    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:44.083201    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:44.094940    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:44.094954    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:45.850699    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:45.850888    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:45.870402    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:45.870500    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:45.884581    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:45.884663    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:45.898657    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:18:45.898731    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:45.909492    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:45.909567    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:45.919931    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:45.919996    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:45.930503    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:45.930578    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:45.940703    4490 logs.go:276] 0 containers: []
	W0904 13:18:45.940714    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:45.940770    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:45.951305    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:45.951324    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:18:45.951330    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:18:45.962425    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:45.962437    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:45.974353    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:45.974365    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:45.989104    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:45.989114    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:46.001297    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:46.001308    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:46.026883    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:46.026893    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:46.041267    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:46.041280    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:46.054599    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:46.054612    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:46.073279    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:46.073290    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:46.084485    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:46.084498    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:46.124307    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:18:46.124317    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:18:46.135671    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:46.135684    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:46.151099    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:46.151109    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:46.187034    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:46.187047    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:46.199274    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:46.199289    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:48.706017    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:46.608286    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:53.708428    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:53.708768    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:53.741688    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:18:53.741825    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:53.759028    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:18:53.759144    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:53.772537    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:18:53.772627    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:53.784381    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:18:53.784455    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:53.794891    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:18:53.794975    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:53.805853    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:18:53.805927    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:53.816382    4490 logs.go:276] 0 containers: []
	W0904 13:18:53.816394    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:53.816467    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:53.827865    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:18:53.827882    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:53.827887    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:53.864934    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:18:53.864945    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:18:53.879945    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:18:53.879954    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:18:53.891435    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:18:53.891446    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:18:53.902208    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:18:53.902218    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:18:53.913659    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:18:53.913669    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:18:53.925829    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:18:53.925837    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:18:53.940429    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:18:53.940442    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:18:53.953134    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:18:53.953148    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:18:53.967685    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:18:53.967696    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:53.979693    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:53.979704    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:54.017607    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:54.017616    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:54.022211    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:18:54.022218    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:18:54.033551    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:18:54.033562    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:18:54.050867    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:54.050878    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:51.610503    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:51.610633    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:51.623632    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:51.623707    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:51.638782    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:51.638854    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:51.648777    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:51.648840    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:51.658927    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:51.658997    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:51.669519    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:51.669583    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:51.680251    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:51.680323    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:51.690238    4660 logs.go:276] 0 containers: []
	W0904 13:18:51.690250    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:51.690312    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:51.701744    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:51.701763    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:51.701769    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:51.740434    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:51.740444    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:51.758681    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:51.758691    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:51.771321    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:51.771333    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:51.785645    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:51.785655    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:51.824033    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:51.824058    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:51.835657    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:51.835667    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:51.847870    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:51.847883    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:51.852020    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:51.852026    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:51.868351    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:51.868368    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:51.884433    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:51.884450    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:51.900057    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:51.900067    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:51.935933    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:51.935947    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:51.951404    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:51.951417    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:51.962333    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:51.962346    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:51.976732    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:51.976743    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:51.988018    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:51.988032    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:54.513928    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:56.575134    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:59.516220    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:59.516471    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:59.532830    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:59.532927    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:59.546189    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:59.546257    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:59.557166    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:59.557240    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:59.571322    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:59.571390    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:59.582046    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:59.582112    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:59.593194    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:59.593261    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:59.603671    4660 logs.go:276] 0 containers: []
	W0904 13:18:59.603689    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:59.603749    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:59.614508    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:59.614526    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:59.614531    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:59.632121    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:59.632133    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:59.643401    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:59.643415    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:59.678328    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:59.678342    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:59.692584    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:59.692594    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:59.732153    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:59.732165    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:59.744206    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:59.744216    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:59.756163    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:59.756174    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:59.769298    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:59.769309    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:59.784920    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:59.784932    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:59.823050    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:59.823059    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:59.837670    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:59.837679    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:59.848960    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:59.848973    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:59.863629    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:59.863640    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:59.875520    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:59.875532    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:59.879498    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:59.879505    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:59.894130    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:59.894140    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
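Each gathering round opens by enumerating containers one component at a time, exactly as in the docker ps -a --filter=name=k8s_... runs above. A self-contained sketch of that discovery step; the containerIDs helper name is illustrative, but the command is the one from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs runs the same enumeration command that appears in the log
    // and returns the whitespace-separated container IDs.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
    	}
    }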
	I0904 13:19:01.577318    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:01.577430    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:01.589062    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:01.589141    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:01.599913    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:01.599978    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:01.610697    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:01.610767    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:01.621139    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:01.621209    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:01.632180    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:01.632254    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:01.646239    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:01.646305    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:01.656525    4490 logs.go:276] 0 containers: []
	W0904 13:19:01.656535    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:01.656590    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:01.667075    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:01.667092    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:01.667098    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:01.678749    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:01.678758    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:01.689909    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:01.689922    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:01.706218    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:01.706228    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:01.743629    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:01.743642    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:01.754975    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:01.754990    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:01.767416    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:01.767428    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:01.782183    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:01.782196    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:01.786484    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:01.786493    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:01.798003    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:01.798014    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:01.809967    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:01.809981    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:01.827936    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:01.827947    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:01.853184    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:01.853193    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:01.867863    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:01.867879    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:01.879400    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:01.879414    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:04.418632    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:02.420695    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:09.420992    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:09.421350    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:09.462354    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:09.462499    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:09.484513    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:09.484597    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:09.507649    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:09.507720    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:09.519426    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:09.519500    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:09.530087    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:09.530156    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:09.544699    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:09.544772    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:09.555307    4490 logs.go:276] 0 containers: []
	W0904 13:19:09.555327    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:09.555393    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:09.565998    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:09.566017    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:09.566022    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:09.601548    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:09.601560    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:09.613509    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:09.613522    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:09.625486    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:09.625496    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:09.639297    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:09.639311    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:09.651077    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:09.651088    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:09.669026    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:09.669039    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:09.692501    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:09.692513    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:09.707157    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:09.707171    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:09.722760    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:09.722771    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:09.734461    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:09.734472    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:09.746718    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:09.746732    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:09.785361    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:09.785368    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:09.790081    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:09.790092    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:09.801904    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:09.801915    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
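The "container status" step above is the one command here with a fallback chain: it uses crictl when `which crictl` finds one, and otherwise the || falls through to docker ps -a. A sketch that shells out the same way, with a hypothetical containerStatus wrapper:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus wraps the exact fallback chain from the log: prefer
    // crictl if `which crictl` resolves it; if not (or if crictl fails),
    // the || falls through to plain docker ps -a.
    func containerStatus() (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(out)
    }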
	I0904 13:19:07.422971    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:07.423112    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:07.436869    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:07.436940    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:07.451553    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:07.451625    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:07.461939    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:07.462016    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:07.472865    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:07.472938    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:07.483609    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:07.483682    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:07.494626    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:07.494696    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:07.505053    4660 logs.go:276] 0 containers: []
	W0904 13:19:07.505064    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:07.505122    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:07.524026    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:07.524043    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:07.524050    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:07.528393    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:07.528400    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:07.565483    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:07.565496    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:07.577287    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:07.577301    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:07.595955    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:07.595966    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:07.608317    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:07.608328    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:07.645033    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:07.645047    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:07.659289    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:07.659299    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:07.673406    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:07.673416    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:07.684237    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:07.684249    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:07.708681    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:07.708691    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:07.720953    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:07.720967    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:07.735747    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:07.735757    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:07.752994    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:07.753005    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:07.790552    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:07.790561    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:07.804409    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:07.804419    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:07.815992    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:07.816006    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:10.328788    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:12.320243    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:15.330063    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:15.330295    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:15.350291    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:15.350381    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:15.364972    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:15.365046    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:15.377488    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:15.377561    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:15.387984    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:15.388056    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:15.398334    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:15.398395    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:15.410117    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:15.410177    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:15.420598    4660 logs.go:276] 0 containers: []
	W0904 13:19:15.420609    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:15.420669    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:15.431245    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:15.431260    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:15.431265    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:15.447556    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:15.447570    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:15.459407    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:15.459417    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:15.471742    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:15.471753    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:15.483686    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:15.483700    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:15.521929    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:15.521939    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:15.542912    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:15.542925    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:15.582289    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:15.582300    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:15.597224    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:15.597235    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:15.614755    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:15.614767    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:15.626759    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:15.626772    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:15.631898    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:15.631907    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:15.656649    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:15.656666    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:15.670207    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:15.670221    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:15.684178    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:15.684194    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:15.695289    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:15.695299    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:15.717345    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:15.717353    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:17.322559    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:17.322804    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:17.344448    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:17.344536    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:17.356642    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:17.356709    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:17.367608    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:17.367680    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:17.377813    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:17.377885    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:17.388632    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:17.388695    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:17.399317    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:17.399394    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:17.409308    4490 logs.go:276] 0 containers: []
	W0904 13:19:17.409321    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:17.409378    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:17.419770    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:17.419786    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:17.419792    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:17.424904    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:17.424910    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:17.439069    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:17.439080    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:17.450396    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:17.450404    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:17.473901    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:17.473910    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:17.485263    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:17.485275    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:17.499565    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:17.499577    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:17.510705    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:17.510719    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:17.523213    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:17.523226    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:17.562836    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:17.562848    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:17.589384    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:17.589394    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:17.603578    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:17.603588    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:17.622174    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:17.622185    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:17.657112    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:17.657124    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:17.669104    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:17.669115    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:20.183228    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:18.255778    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:25.184032    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:25.184328    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:25.216718    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:25.216844    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:25.236452    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:25.236550    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:25.250735    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:25.250815    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:25.262470    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:25.262538    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:25.273267    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:25.273338    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:25.284326    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:25.284396    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:25.294483    4490 logs.go:276] 0 containers: []
	W0904 13:19:25.294494    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:25.294553    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:25.305290    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:25.305306    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:25.305312    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:25.316550    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:25.316564    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:25.340505    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:25.340516    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:25.377924    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:25.377933    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:25.398291    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:25.398304    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:25.424307    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:25.424317    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:25.444936    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:25.444945    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:25.481045    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:25.481056    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:25.493179    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:25.493193    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:25.507320    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:25.507334    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:25.522452    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:25.522465    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:25.534058    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:25.534069    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:25.545993    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:25.546004    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:25.558239    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:25.558250    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:25.562709    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:25.562720    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:23.258124    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:23.258419    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:23.287767    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:23.287891    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:23.305230    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:23.305320    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:23.318784    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:23.318860    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:23.331409    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:23.331475    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:23.342078    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:23.342150    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:23.352777    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:23.352852    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:23.362973    4660 logs.go:276] 0 containers: []
	W0904 13:19:23.362983    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:23.363041    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:23.373387    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:23.373405    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:23.373410    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:23.385231    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:23.385243    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:23.423956    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:23.423965    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:23.437876    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:23.437887    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:23.449878    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:23.449888    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:23.467347    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:23.467357    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:23.478760    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:23.478770    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:23.514099    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:23.514110    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:23.552687    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:23.552699    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:23.567342    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:23.567352    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:23.579043    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:23.579055    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:23.590364    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:23.590374    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:23.594561    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:23.594571    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:23.617005    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:23.617016    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:23.631410    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:23.631423    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:23.654210    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:23.654221    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:23.668140    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:23.668152    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:26.181459    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:28.078828    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:31.184124    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:31.184405    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:31.219641    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:31.219787    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:31.238011    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:31.238116    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:31.251511    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:31.251608    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:31.267302    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:31.267382    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:31.277814    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:31.277881    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:31.293276    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:31.293349    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:31.304537    4660 logs.go:276] 0 containers: []
	W0904 13:19:31.304549    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:31.304610    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:31.315718    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:31.315735    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:31.315741    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:31.353642    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:31.353655    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:31.368477    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:31.368489    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:31.383318    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:31.383333    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:31.394698    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:31.394712    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:31.432644    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:31.432657    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:33.081132    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:33.081493    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:33.106381    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:33.106489    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:33.124474    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:33.124554    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:33.137687    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:33.137762    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:33.149337    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:33.149401    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:33.160071    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:33.160129    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:33.170170    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:33.170232    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:33.180325    4490 logs.go:276] 0 containers: []
	W0904 13:19:33.180337    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:33.180395    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:33.190684    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:33.190698    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:33.190703    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:33.228730    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:33.228743    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:33.243543    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:33.243552    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:33.258036    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:33.258050    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:33.269448    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:33.269463    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:33.281060    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:33.281075    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:33.305585    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:33.305592    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:33.309653    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:33.309660    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:33.345209    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:33.345220    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:33.357347    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:33.357357    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:33.372390    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:33.372404    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:33.384189    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:33.384200    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:33.401763    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:33.401773    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:33.413594    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:33.413606    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:33.427347    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:33.427359    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:31.468558    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:31.468570    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:31.488456    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:31.488466    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:31.513204    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:31.513214    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:31.534208    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:31.534218    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:31.545997    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:31.546008    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:31.558275    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:31.558286    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:31.573071    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:31.573084    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:31.585387    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:31.585399    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:31.604135    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:31.604147    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:31.608823    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:31.608831    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:31.623263    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:31.623273    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:34.137548    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:35.943673    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:39.140033    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:39.140276    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:39.162623    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:39.162724    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:39.178558    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:39.178641    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:39.192659    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:39.192733    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:39.211288    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:39.211356    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:39.222379    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:39.222453    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:39.233053    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:39.233127    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:39.243304    4660 logs.go:276] 0 containers: []
	W0904 13:19:39.243314    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:39.243372    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:39.255266    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:39.255284    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:39.255291    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:39.259387    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:39.259396    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:39.295553    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:39.295563    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:39.307561    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:39.307572    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:39.323111    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:39.323124    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:39.336048    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:39.336059    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:39.374850    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:39.374859    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:39.422266    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:39.422277    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:39.436402    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:39.436412    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:39.447276    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:39.447288    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:39.465005    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:39.465016    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:39.488225    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:39.488237    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:39.508659    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:39.508671    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:39.520205    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:39.520216    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:39.532553    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:39.532565    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:39.546722    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:39.546732    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:39.558930    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:39.558941    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
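Taken together, both test processes (pids 4490 and 4660) are running the same cycle: probe /healthz, and on timeout run a full gathering round before probing again. A condensed, self-contained sketch of that loop, under the same assumptions as the probe sketch earlier:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // each probe gives up after ~5 s, as logged
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for attempt := 1; attempt <= 8; attempt++ {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err)
    			// In the log, this is where a gathering round runs: docker logs
    			// for each discovered container, journalctl for kubelet/docker,
    			// dmesg, and `kubectl describe nodes`.
    			time.Sleep(3 * time.Second) // the rounds recur every few seconds
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("healthz:", resp.Status)
    		return
    	}
    }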
	I0904 13:19:40.946022    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:40.946213    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:40.960192    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:40.960282    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:40.971827    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:40.971899    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:40.983003    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:40.983073    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:40.994038    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:40.994120    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:41.004530    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:41.004599    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:41.015722    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:41.015790    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:41.025609    4490 logs.go:276] 0 containers: []
	W0904 13:19:41.025620    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:41.025675    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:41.035615    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:41.035633    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:41.035638    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:41.047164    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:41.047174    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:41.058651    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:41.058662    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:41.095455    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:41.095465    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:41.109503    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:41.109513    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:41.121656    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:41.121669    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:41.133505    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:41.133516    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:41.151000    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:41.151010    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:41.162879    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:41.162889    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:41.200653    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:41.200666    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:41.220438    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:41.220452    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:41.225117    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:41.225126    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:41.240308    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:41.240316    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:41.251845    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:41.251857    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:41.270993    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:41.271004    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:43.797014    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:42.072384    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:48.799492    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:48.799631    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:48.815587    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:48.815671    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:48.828301    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:48.828374    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:48.839426    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:48.839503    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:48.852761    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:48.852833    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:48.864341    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:48.864408    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:48.875083    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:48.875157    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:48.885797    4490 logs.go:276] 0 containers: []
	W0904 13:19:48.885810    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:48.885866    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:48.896054    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:48.896070    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:48.896075    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:48.935117    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:48.935125    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:48.939357    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:48.939366    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:48.953900    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:48.953911    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:48.971766    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:48.971775    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:48.984116    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:48.984125    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:48.998529    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:48.998537    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:49.012370    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:49.012380    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:49.024021    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:49.024030    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:49.035939    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:49.035949    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:49.075720    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:49.075732    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:49.087560    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:49.087570    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:49.099754    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:49.099764    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:49.111365    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:49.111376    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:49.122789    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:49.122799    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:47.074656    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:47.074744    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:47.085482    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:47.085545    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:47.098088    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:47.098153    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:47.108898    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:47.108970    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:47.119275    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:47.119343    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:47.129720    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:47.129781    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:47.140501    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:47.140562    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:47.150272    4660 logs.go:276] 0 containers: []
	W0904 13:19:47.150289    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:47.150352    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:47.161386    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:47.161410    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:47.161416    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:47.177192    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:47.177203    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:47.188827    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:47.188839    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:47.207895    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:47.207906    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:47.212365    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:47.212371    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:47.249501    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:47.249511    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:47.264334    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:47.264345    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:47.288037    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:47.288047    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:47.326786    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:47.326798    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:47.362420    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:47.362432    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:47.377334    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:47.377344    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:47.389575    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:47.389587    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:47.403997    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:47.404010    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:47.417306    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:47.417316    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:47.431769    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:47.431779    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:47.444404    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:47.444414    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:47.455875    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:47.455887    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:49.970090    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:51.648488    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:54.972285    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:54.972341    4660 kubeadm.go:597] duration metric: took 4m4.657293625s to restartPrimaryControlPlane
	W0904 13:19:54.972401    4660 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0904 13:19:54.972427    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0904 13:19:55.973814    4660 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001386458s)
	I0904 13:19:55.973892    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 13:19:55.978998    4660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 13:19:55.981833    4660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 13:19:55.984880    4660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 13:19:55.984887    4660 kubeadm.go:157] found existing configuration files:
	
	I0904 13:19:55.984911    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf
	I0904 13:19:55.987907    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 13:19:55.987929    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 13:19:55.990708    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf
	I0904 13:19:55.993415    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 13:19:55.993441    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 13:19:55.996807    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf
	I0904 13:19:55.999854    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 13:19:55.999879    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 13:19:56.002472    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf
	I0904 13:19:56.005111    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 13:19:56.005137    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 13:19:56.008356    4660 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 13:19:56.029219    4660 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0904 13:19:56.029281    4660 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 13:19:56.088411    4660 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 13:19:56.088470    4660 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 13:19:56.088527    4660 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0904 13:19:56.141359    4660 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 13:19:56.149599    4660 out.go:235]   - Generating certificates and keys ...
	I0904 13:19:56.149634    4660 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 13:19:56.149663    4660 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 13:19:56.149701    4660 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0904 13:19:56.149735    4660 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0904 13:19:56.149774    4660 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0904 13:19:56.149805    4660 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0904 13:19:56.149833    4660 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0904 13:19:56.149861    4660 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0904 13:19:56.149905    4660 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0904 13:19:56.149960    4660 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0904 13:19:56.149980    4660 kubeadm.go:310] [certs] Using the existing "sa" key
	I0904 13:19:56.150012    4660 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 13:19:56.272152    4660 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 13:19:56.320860    4660 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 13:19:56.515671    4660 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 13:19:56.764096    4660 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 13:19:56.793089    4660 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 13:19:56.793744    4660 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 13:19:56.793941    4660 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 13:19:56.865170    4660 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 13:19:56.650688    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:56.650809    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:56.661893    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:19:56.661970    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:56.672816    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:19:56.672895    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:56.683999    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:19:56.684076    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:56.695455    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:19:56.695520    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:56.706402    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:19:56.706473    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:56.717531    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:19:56.717594    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:56.728211    4490 logs.go:276] 0 containers: []
	W0904 13:19:56.728223    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:56.728285    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:56.739375    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:19:56.739392    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:19:56.739396    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:19:56.751848    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:19:56.751863    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:19:56.764089    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:56.764102    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:56.790349    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:19:56.790366    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:19:56.803033    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:19:56.803047    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:19:56.819898    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:19:56.819910    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:56.833049    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:56.833060    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:56.871507    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:19:56.871517    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:19:56.887074    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:19:56.887086    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:19:56.901797    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:56.901808    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:56.906572    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:19:56.906581    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:19:56.921581    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:19:56.921592    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:19:56.937345    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:56.937356    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:56.976571    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:19:56.976582    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:19:56.988896    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:19:56.988906    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:19:59.509052    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:56.868357    4660 out.go:235]   - Booting up control plane ...
	I0904 13:19:56.868469    4660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 13:19:56.868508    4660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 13:19:56.868594    4660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 13:19:56.868744    4660 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 13:19:56.869214    4660 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0904 13:20:00.871351    4660 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002552 seconds
	I0904 13:20:00.871470    4660 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 13:20:00.875268    4660 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 13:20:01.389303    4660 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 13:20:01.389548    4660 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-175000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 13:20:01.895585    4660 kubeadm.go:310] [bootstrap-token] Using token: e43l1m.2immplqdgm4q9v3p
	I0904 13:20:01.901862    4660 out.go:235]   - Configuring RBAC rules ...
	I0904 13:20:01.901924    4660 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 13:20:01.901973    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 13:20:01.907071    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 13:20:01.907842    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 13:20:01.908635    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 13:20:01.909470    4660 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 13:20:01.912673    4660 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 13:20:02.079797    4660 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 13:20:02.299813    4660 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 13:20:02.300272    4660 kubeadm.go:310] 
	I0904 13:20:02.300303    4660 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 13:20:02.300306    4660 kubeadm.go:310] 
	I0904 13:20:02.300348    4660 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 13:20:02.300351    4660 kubeadm.go:310] 
	I0904 13:20:02.300375    4660 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 13:20:02.300408    4660 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 13:20:02.300436    4660 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 13:20:02.300440    4660 kubeadm.go:310] 
	I0904 13:20:02.300464    4660 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 13:20:02.300469    4660 kubeadm.go:310] 
	I0904 13:20:02.300493    4660 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 13:20:02.300496    4660 kubeadm.go:310] 
	I0904 13:20:02.300531    4660 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 13:20:02.300574    4660 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 13:20:02.300618    4660 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 13:20:02.300623    4660 kubeadm.go:310] 
	I0904 13:20:02.300667    4660 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 13:20:02.300710    4660 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 13:20:02.300713    4660 kubeadm.go:310] 
	I0904 13:20:02.300771    4660 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e43l1m.2immplqdgm4q9v3p \
	I0904 13:20:02.300823    4660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 \
	I0904 13:20:02.300834    4660 kubeadm.go:310] 	--control-plane 
	I0904 13:20:02.300838    4660 kubeadm.go:310] 
	I0904 13:20:02.300879    4660 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 13:20:02.300884    4660 kubeadm.go:310] 
	I0904 13:20:02.300925    4660 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e43l1m.2immplqdgm4q9v3p \
	I0904 13:20:02.301016    4660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 
	I0904 13:20:02.301082    4660 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 13:20:02.301108    4660 cni.go:84] Creating CNI manager for ""
	I0904 13:20:02.301118    4660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:20:02.304829    4660 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 13:20:02.312768    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 13:20:02.316172    4660 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0904 13:20:02.321378    4660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 13:20:02.321460    4660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-175000 minikube.k8s.io/updated_at=2024_09_04T13_20_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=stopped-upgrade-175000 minikube.k8s.io/primary=true
	I0904 13:20:02.321460    4660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 13:20:02.334183    4660 ops.go:34] apiserver oom_adj: -16
	I0904 13:20:02.365257    4660 kubeadm.go:1113] duration metric: took 43.856541ms to wait for elevateKubeSystemPrivileges
	I0904 13:20:02.365269    4660 kubeadm.go:394] duration metric: took 4m12.063818708s to StartCluster
	I0904 13:20:02.365280    4660 settings.go:142] acquiring lock: {Name:mk9e5d70c30d2e6b96e7a9eeb7ab14f5f9a1127e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:20:02.365369    4660 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:20:02.365802    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:20:02.366005    4660 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:20:02.366038    4660 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 13:20:02.366077    4660 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-175000"
	I0904 13:20:02.366090    4660 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-175000"
	W0904 13:20:02.366094    4660 addons.go:243] addon storage-provisioner should already be in state true
	I0904 13:20:02.366094    4660 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-175000"
	I0904 13:20:02.366107    4660 host.go:66] Checking if "stopped-upgrade-175000" exists ...
	I0904 13:20:02.366109    4660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-175000"
	I0904 13:20:02.366128    4660 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:20:02.366505    4660 retry.go:31] will retry after 648.253957ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/monitor: connect: connection refused
	I0904 13:20:02.367235    4660 kapi.go:59] client config for stopped-upgrade-175000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.key", CAFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10217ff80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 13:20:02.367351    4660 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-175000"
	W0904 13:20:02.367356    4660 addons.go:243] addon default-storageclass should already be in state true
	I0904 13:20:02.367368    4660 host.go:66] Checking if "stopped-upgrade-175000" exists ...
	I0904 13:20:02.367881    4660 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 13:20:02.367886    4660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 13:20:02.367891    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:20:02.369770    4660 out.go:177] * Verifying Kubernetes components...
	I0904 13:20:02.376750    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:20:02.452948    4660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 13:20:02.458313    4660 api_server.go:52] waiting for apiserver process to appear ...
	I0904 13:20:02.458347    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:20:02.462245    4660 api_server.go:72] duration metric: took 96.231208ms to wait for apiserver process to appear ...
	I0904 13:20:02.462252    4660 api_server.go:88] waiting for apiserver healthz status ...
	I0904 13:20:02.462258    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:02.520369    4660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 13:20:02.840626    4660 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 13:20:02.840639    4660 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 13:20:03.021622    4660 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:20:04.511190    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:04.511315    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:20:04.522285    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:20:04.522365    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:20:04.533295    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:20:04.533362    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:20:04.543782    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:20:04.543859    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:20:04.554637    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:20:04.554709    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:20:04.565323    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:20:04.565394    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:20:04.575866    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:20:04.575932    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:20:04.585837    4490 logs.go:276] 0 containers: []
	W0904 13:20:04.585848    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:20:04.585907    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:20:04.596134    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:20:04.596150    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:20:04.596155    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:20:04.619227    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:20:04.619238    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:20:04.633204    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:20:04.633215    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:20:04.670895    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:20:04.670909    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:20:04.681499    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:20:04.681517    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:20:04.715830    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:20:04.715844    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:20:04.730759    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:20:04.730773    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:20:04.752222    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:20:04.752233    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:20:04.764030    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:20:04.764043    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:20:04.775727    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:20:04.775736    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:20:04.787789    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:20:04.787808    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:20:04.799346    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:20:04.799359    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:20:04.811218    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:20:04.811228    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:20:04.848790    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:20:04.848803    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:20:04.866892    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:20:04.866904    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:20:03.025596    4660 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:20:03.025603    4660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 13:20:03.025612    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:20:03.061941    4660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:20:07.380679    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:07.464329    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:07.464357    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:12.382849    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:12.382984    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:20:12.396075    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:20:12.396153    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:20:12.408229    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:20:12.408300    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:20:12.418549    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:20:12.418622    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:20:12.428752    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:20:12.428820    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:20:12.439088    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:20:12.439172    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:20:12.450489    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:20:12.450560    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:20:12.461888    4490 logs.go:276] 0 containers: []
	W0904 13:20:12.461900    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:20:12.461964    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:20:12.473650    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:20:12.473668    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:20:12.473674    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:20:12.487550    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:20:12.487563    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:20:12.499748    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:20:12.499758    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:20:12.511774    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:20:12.511785    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:20:12.529575    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:20:12.529586    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:20:12.542052    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:20:12.542063    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:20:12.554786    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:20:12.554797    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:20:12.566420    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:20:12.566430    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:20:12.581004    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:20:12.581014    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:20:12.606379    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:20:12.606392    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:20:12.643592    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:20:12.643611    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:20:12.680468    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:20:12.680482    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:20:12.692555    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:20:12.692569    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:20:12.696986    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:20:12.696993    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:20:12.711580    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:20:12.711593    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:20:15.225159    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:12.464983    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:12.465003    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:20.227313    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:20.227447    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:20:20.241412    4490 logs.go:276] 1 containers: [fee74e624df2]
	I0904 13:20:20.241494    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:20:20.252734    4490 logs.go:276] 1 containers: [7a3c15652139]
	I0904 13:20:20.252803    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:20:20.263098    4490 logs.go:276] 4 containers: [f549e8e25192 064346220ce8 c2e4fd07d881 083b85426991]
	I0904 13:20:20.263172    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:20:20.274483    4490 logs.go:276] 1 containers: [6e32ad2a10f6]
	I0904 13:20:20.274568    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:20:20.284887    4490 logs.go:276] 1 containers: [7a5f2394b31f]
	I0904 13:20:20.284960    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:20:20.295400    4490 logs.go:276] 1 containers: [c8e8e98b2e71]
	I0904 13:20:20.295473    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:20:20.305760    4490 logs.go:276] 0 containers: []
	W0904 13:20:20.305771    4490 logs.go:278] No container was found matching "kindnet"
	I0904 13:20:20.305823    4490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:20:20.316097    4490 logs.go:276] 1 containers: [9b415d028298]
	I0904 13:20:20.316113    4490 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:20:20.316119    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:20:20.354107    4490 logs.go:123] Gathering logs for coredns [f549e8e25192] ...
	I0904 13:20:20.354120    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f549e8e25192"
	I0904 13:20:20.365488    4490 logs.go:123] Gathering logs for coredns [064346220ce8] ...
	I0904 13:20:20.365498    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064346220ce8"
	I0904 13:20:20.377054    4490 logs.go:123] Gathering logs for etcd [7a3c15652139] ...
	I0904 13:20:20.377066    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a3c15652139"
	I0904 13:20:20.390413    4490 logs.go:123] Gathering logs for kube-proxy [7a5f2394b31f] ...
	I0904 13:20:20.390426    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a5f2394b31f"
	I0904 13:20:20.402259    4490 logs.go:123] Gathering logs for kube-controller-manager [c8e8e98b2e71] ...
	I0904 13:20:20.402269    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e8e98b2e71"
	I0904 13:20:20.419503    4490 logs.go:123] Gathering logs for storage-provisioner [9b415d028298] ...
	I0904 13:20:20.419513    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b415d028298"
	I0904 13:20:20.431396    4490 logs.go:123] Gathering logs for container status ...
	I0904 13:20:20.431408    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:20:20.444279    4490 logs.go:123] Gathering logs for kube-apiserver [fee74e624df2] ...
	I0904 13:20:20.444291    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fee74e624df2"
	I0904 13:20:20.460471    4490 logs.go:123] Gathering logs for coredns [c2e4fd07d881] ...
	I0904 13:20:20.460485    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2e4fd07d881"
	I0904 13:20:20.471960    4490 logs.go:123] Gathering logs for coredns [083b85426991] ...
	I0904 13:20:20.471975    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083b85426991"
	I0904 13:20:20.483309    4490 logs.go:123] Gathering logs for kube-scheduler [6e32ad2a10f6] ...
	I0904 13:20:20.483320    4490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e32ad2a10f6"
	I0904 13:20:20.497897    4490 logs.go:123] Gathering logs for Docker ...
	I0904 13:20:20.497907    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:20:20.520323    4490 logs.go:123] Gathering logs for kubelet ...
	I0904 13:20:20.520332    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:20:20.557464    4490 logs.go:123] Gathering logs for dmesg ...
	I0904 13:20:20.557472    4490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:20:17.465332    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:17.465387    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:23.063485    4490 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:22.465878    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:22.465942    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:28.065625    4490 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:28.071208    4490 out.go:201] 
	W0904 13:20:28.075118    4490 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0904 13:20:28.075129    4490 out.go:270] * 
	W0904 13:20:28.076039    4490 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:20:28.087069    4490 out.go:201] 
	I0904 13:20:27.466618    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:27.466669    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:32.467531    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:32.467578    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0904 13:20:32.842147    4660 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0904 13:20:32.845402    4660 out.go:177] * Enabled addons: storage-provisioner
	I0904 13:20:32.854404    4660 addons.go:510] duration metric: took 30.488889083s for enable addons: enabled=[storage-provisioner]
	I0904 13:20:37.468688    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:37.468739    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-09-04 20:11:40 UTC, ends at Wed 2024-09-04 20:20:44 UTC. --
	Sep 04 20:20:28 running-upgrade-478000 dockerd[2873]: time="2024-09-04T20:20:28.777465479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 04 20:20:28 running-upgrade-478000 dockerd[2873]: time="2024-09-04T20:20:28.777510477Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d047ed2773c30d80c05ca021e6c7d3f335e941ac450a9073dd0023bb8dacbbe9 pid=18958 runtime=io.containerd.runc.v2
	Sep 04 20:20:28 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:28Z" level=error msg="ContainerStats resp: {0x40008f9ac0 linux}"
	Sep 04 20:20:28 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:28Z" level=error msg="ContainerStats resp: {0x4000832700 linux}"
	Sep 04 20:20:29 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:29Z" level=error msg="ContainerStats resp: {0x40006e3780 linux}"
	Sep 04 20:20:29 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 04 20:20:30 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:30Z" level=error msg="ContainerStats resp: {0x400097e600 linux}"
	Sep 04 20:20:30 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:30Z" level=error msg="ContainerStats resp: {0x4000321dc0 linux}"
	Sep 04 20:20:30 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:30Z" level=error msg="ContainerStats resp: {0x40006024c0 linux}"
	Sep 04 20:20:30 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:30Z" level=error msg="ContainerStats resp: {0x4000602f00 linux}"
	Sep 04 20:20:30 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:30Z" level=error msg="ContainerStats resp: {0x400097f080 linux}"
	Sep 04 20:20:30 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:30Z" level=error msg="ContainerStats resp: {0x400097f340 linux}"
	Sep 04 20:20:30 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:30Z" level=error msg="ContainerStats resp: {0x400047c880 linux}"
	Sep 04 20:20:34 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 04 20:20:39 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 04 20:20:40 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:40Z" level=error msg="ContainerStats resp: {0x4000965240 linux}"
	Sep 04 20:20:40 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:40Z" level=error msg="ContainerStats resp: {0x4000603800 linux}"
	Sep 04 20:20:41 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:41Z" level=error msg="ContainerStats resp: {0x4000321600 linux}"
	Sep 04 20:20:42 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:42Z" level=error msg="ContainerStats resp: {0x40006e2600 linux}"
	Sep 04 20:20:42 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:42Z" level=error msg="ContainerStats resp: {0x4000089e80 linux}"
	Sep 04 20:20:42 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:42Z" level=error msg="ContainerStats resp: {0x400009d540 linux}"
	Sep 04 20:20:42 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:42Z" level=error msg="ContainerStats resp: {0x400009df00 linux}"
	Sep 04 20:20:42 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:42Z" level=error msg="ContainerStats resp: {0x40006e3900 linux}"
	Sep 04 20:20:42 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:42Z" level=error msg="ContainerStats resp: {0x40006e3d80 linux}"
	Sep 04 20:20:42 running-upgrade-478000 cri-dockerd[2711]: time="2024-09-04T20:20:42Z" level=error msg="ContainerStats resp: {0x40004a9540 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d047ed2773c30       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   b171d35ca2b83
	b7ffe68840354       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   090ea07e54634
	f549e8e25192e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   090ea07e54634
	064346220ce8c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b171d35ca2b83
	9b415d0282988       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   8c66216198d4a
	7a5f2394b31f9       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   1f84b8131fc57
	6e32ad2a10f6c       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   abdc034f3ee66
	c8e8e98b2e711       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   06d5182af7810
	fee74e624df2b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   d5ba5f88b0f01
	7a3c156521393       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   17319112356c8
	
	
	==> coredns [064346220ce8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1036320117353325520.8167807097881693149. HINFO: read udp 10.244.0.2:35905->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1036320117353325520.8167807097881693149. HINFO: read udp 10.244.0.2:50183->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1036320117353325520.8167807097881693149. HINFO: read udp 10.244.0.2:50541->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1036320117353325520.8167807097881693149. HINFO: read udp 10.244.0.2:54008->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1036320117353325520.8167807097881693149. HINFO: read udp 10.244.0.2:45484->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1036320117353325520.8167807097881693149. HINFO: read udp 10.244.0.2:39931->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b7ffe6884035] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4873758302353374495.3774336094057970426. HINFO: read udp 10.244.0.3:56874->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4873758302353374495.3774336094057970426. HINFO: read udp 10.244.0.3:58967->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4873758302353374495.3774336094057970426. HINFO: read udp 10.244.0.3:49453->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d047ed2773c3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4058351176073126358.3233726570682145532. HINFO: read udp 10.244.0.2:54958->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4058351176073126358.3233726570682145532. HINFO: read udp 10.244.0.2:59047->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4058351176073126358.3233726570682145532. HINFO: read udp 10.244.0.2:38586->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f549e8e25192] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:32839->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:52622->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:50554->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:41118->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:36011->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:32780->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:56478->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:49517->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:52971->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4595385980663877192.3934266177322435816. HINFO: read udp 10.244.0.3:39567->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-478000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-478000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=running-upgrade-478000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T13_16_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 20:16:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-478000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 20:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 20:16:27 +0000   Wed, 04 Sep 2024 20:16:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 20:16:27 +0000   Wed, 04 Sep 2024 20:16:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 20:16:27 +0000   Wed, 04 Sep 2024 20:16:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 20:16:27 +0000   Wed, 04 Sep 2024 20:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-478000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c1da0e07ca94b149a68cfa06efc7a55
	  System UUID:                5c1da0e07ca94b149a68cfa06efc7a55
	  Boot ID:                    fa85a5ea-58d2-4e06-bfc6-1036198a9cc6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4c6b7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-9m5qb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-478000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-478000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-478000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-lzr48                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-478000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-478000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-478000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-478000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-478000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-478000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-478000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-478000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-478000 event: Registered Node running-upgrade-478000 in Controller
	
	
	==> dmesg <==
	[  +1.659254] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.082735] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.080750] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.135065] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.097495] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.079427] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.590738] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[Sep 4 20:12] systemd-fstab-generator[1833]: Ignoring "noauto" for root device
	[  +2.584607] systemd-fstab-generator[2189]: Ignoring "noauto" for root device
	[  +0.140515] systemd-fstab-generator[2220]: Ignoring "noauto" for root device
	[  +0.105120] systemd-fstab-generator[2234]: Ignoring "noauto" for root device
	[  +0.097050] systemd-fstab-generator[2249]: Ignoring "noauto" for root device
	[  +1.534497] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.133419] systemd-fstab-generator[2668]: Ignoring "noauto" for root device
	[  +0.076554] systemd-fstab-generator[2679]: Ignoring "noauto" for root device
	[  +0.071828] systemd-fstab-generator[2690]: Ignoring "noauto" for root device
	[  +0.100184] systemd-fstab-generator[2704]: Ignoring "noauto" for root device
	[  +2.262617] systemd-fstab-generator[2860]: Ignoring "noauto" for root device
	[  +3.347039] systemd-fstab-generator[3265]: Ignoring "noauto" for root device
	[  +1.423154] systemd-fstab-generator[3895]: Ignoring "noauto" for root device
	[ +19.244339] kauditd_printk_skb: 68 callbacks suppressed
	[Sep 4 20:16] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.280295] systemd-fstab-generator[12082]: Ignoring "noauto" for root device
	[  +5.639009] systemd-fstab-generator[12681]: Ignoring "noauto" for root device
	[  +0.474072] systemd-fstab-generator[12813]: Ignoring "noauto" for root device
	
	
	==> etcd [7a3c15652139] <==
	{"level":"info","ts":"2024-09-04T20:16:22.563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-04T20:16:22.563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-04T20:16:22.573Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-04T20:16:22.573Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-04T20:16:22.573Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-04T20:16:22.573Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-04T20:16:22.573Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-04T20:16:22.859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-04T20:16:22.859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-04T20:16:22.859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-04T20:16:22.859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-04T20:16:22.859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-04T20:16:22.859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-04T20:16:22.859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-04T20:16:22.860Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-04T20:16:22.860Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-478000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-04T20:16:22.860Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-04T20:16:22.861Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-04T20:16:22.861Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-04T20:16:22.861Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-04T20:16:22.861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-04T20:16:22.861Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-04T20:16:22.861Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-04T20:16:22.873Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-04T20:16:22.873Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:20:44 up 9 min,  0 users,  load average: 0.37, 0.35, 0.18
	Linux running-upgrade-478000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fee74e624df2] <==
	I0904 20:16:24.472319       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0904 20:16:24.475511       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 20:16:24.475555       1 cache.go:39] Caches are synced for autoregister controller
	I0904 20:16:24.475632       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0904 20:16:24.475638       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0904 20:16:24.483587       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0904 20:16:24.521959       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0904 20:16:25.203586       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0904 20:16:25.378804       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0904 20:16:25.382755       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0904 20:16:25.382776       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 20:16:25.537401       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 20:16:25.547231       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 20:16:25.647669       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0904 20:16:25.650196       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0904 20:16:25.650542       1 controller.go:611] quota admission added evaluator for: endpoints
	I0904 20:16:25.651819       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 20:16:26.520571       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0904 20:16:27.044039       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0904 20:16:27.047885       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0904 20:16:27.056093       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0904 20:16:27.117719       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 20:16:39.915529       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0904 20:16:40.561792       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0904 20:16:41.357891       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c8e8e98b2e71] <==
	I0904 20:16:39.613451       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0904 20:16:39.613456       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0904 20:16:39.613468       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0904 20:16:39.613906       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0904 20:16:39.623948       1 shared_informer.go:262] Caches are synced for attach detach
	I0904 20:16:39.623974       1 shared_informer.go:262] Caches are synced for taint
	I0904 20:16:39.624014       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0904 20:16:39.624044       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-478000. Assuming now as a timestamp.
	I0904 20:16:39.624060       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0904 20:16:39.624384       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0904 20:16:39.624485       1 event.go:294] "Event occurred" object="running-upgrade-478000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-478000 event: Registered Node running-upgrade-478000 in Controller"
	I0904 20:16:39.633289       1 shared_informer.go:262] Caches are synced for crt configmap
	I0904 20:16:39.662305       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0904 20:16:39.764085       1 shared_informer.go:262] Caches are synced for stateful set
	I0904 20:16:39.789260       1 shared_informer.go:262] Caches are synced for resource quota
	I0904 20:16:39.796708       1 shared_informer.go:262] Caches are synced for daemon sets
	I0904 20:16:39.811594       1 shared_informer.go:262] Caches are synced for persistent volume
	I0904 20:16:39.812772       1 shared_informer.go:262] Caches are synced for resource quota
	I0904 20:16:39.918520       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0904 20:16:40.243226       1 shared_informer.go:262] Caches are synced for garbage collector
	I0904 20:16:40.299962       1 shared_informer.go:262] Caches are synced for garbage collector
	I0904 20:16:40.300047       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0904 20:16:40.564900       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lzr48"
	I0904 20:16:40.615564       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4c6b7"
	I0904 20:16:40.619950       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9m5qb"
	
	
	==> kube-proxy [7a5f2394b31f] <==
	I0904 20:16:41.347542       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0904 20:16:41.347588       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0904 20:16:41.347600       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0904 20:16:41.356082       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0904 20:16:41.356094       1 server_others.go:206] "Using iptables Proxier"
	I0904 20:16:41.356105       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0904 20:16:41.356258       1 server.go:661] "Version info" version="v1.24.1"
	I0904 20:16:41.356266       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:16:41.356565       1 config.go:317] "Starting service config controller"
	I0904 20:16:41.356575       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0904 20:16:41.356608       1 config.go:226] "Starting endpoint slice config controller"
	I0904 20:16:41.356618       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0904 20:16:41.356908       1 config.go:444] "Starting node config controller"
	I0904 20:16:41.356934       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0904 20:16:41.456789       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0904 20:16:41.456867       1 shared_informer.go:262] Caches are synced for service config
	I0904 20:16:41.457023       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [6e32ad2a10f6] <==
	W0904 20:16:24.443819       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 20:16:24.443867       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0904 20:16:24.443998       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0904 20:16:24.444039       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0904 20:16:24.444069       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0904 20:16:24.444085       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0904 20:16:24.444122       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0904 20:16:24.444141       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0904 20:16:24.444179       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0904 20:16:24.444268       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0904 20:16:24.444314       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0904 20:16:24.444344       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0904 20:16:24.444389       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0904 20:16:24.444410       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0904 20:16:24.444454       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 20:16:24.444473       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0904 20:16:24.444890       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 20:16:24.444901       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 20:16:25.288998       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 20:16:25.289313       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0904 20:16:25.290367       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0904 20:16:25.290427       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0904 20:16:25.473198       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0904 20:16:25.473214       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0904 20:16:25.945939       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-09-04 20:11:40 UTC, ends at Wed 2024-09-04 20:20:44 UTC. --
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: I0904 20:16:39.581117   12687 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: I0904 20:16:39.581519   12687 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: I0904 20:16:39.634379   12687 topology_manager.go:200] "Topology Admit Handler"
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: I0904 20:16:39.786606   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n528p\" (UniqueName: \"kubernetes.io/projected/efe1de03-ebf5-4371-839f-5bdfbf0e6766-kube-api-access-n528p\") pod \"storage-provisioner\" (UID: \"efe1de03-ebf5-4371-839f-5bdfbf0e6766\") " pod="kube-system/storage-provisioner"
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: I0904 20:16:39.786644   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/efe1de03-ebf5-4371-839f-5bdfbf0e6766-tmp\") pod \"storage-provisioner\" (UID: \"efe1de03-ebf5-4371-839f-5bdfbf0e6766\") " pod="kube-system/storage-provisioner"
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: E0904 20:16:39.892243   12687 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: E0904 20:16:39.892270   12687 projected.go:192] Error preparing data for projected volume kube-api-access-n528p for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 04 20:16:39 running-upgrade-478000 kubelet[12687]: E0904 20:16:39.892321   12687 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/efe1de03-ebf5-4371-839f-5bdfbf0e6766-kube-api-access-n528p podName:efe1de03-ebf5-4371-839f-5bdfbf0e6766 nodeName:}" failed. No retries permitted until 2024-09-04 20:16:40.392302313 +0000 UTC m=+13.365631345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n528p" (UniqueName: "kubernetes.io/projected/efe1de03-ebf5-4371-839f-5bdfbf0e6766-kube-api-access-n528p") pod "storage-provisioner" (UID: "efe1de03-ebf5-4371-839f-5bdfbf0e6766") : configmap "kube-root-ca.crt" not found
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: E0904 20:16:40.396550   12687 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: E0904 20:16:40.396575   12687 projected.go:192] Error preparing data for projected volume kube-api-access-n528p for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: E0904 20:16:40.396615   12687 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/efe1de03-ebf5-4371-839f-5bdfbf0e6766-kube-api-access-n528p podName:efe1de03-ebf5-4371-839f-5bdfbf0e6766 nodeName:}" failed. No retries permitted until 2024-09-04 20:16:41.396601075 +0000 UTC m=+14.369930107 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n528p" (UniqueName: "kubernetes.io/projected/efe1de03-ebf5-4371-839f-5bdfbf0e6766-kube-api-access-n528p") pod "storage-provisioner" (UID: "efe1de03-ebf5-4371-839f-5bdfbf0e6766") : configmap "kube-root-ca.crt" not found
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.567078   12687 topology_manager.go:200] "Topology Admit Handler"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.615085   12687 topology_manager.go:200] "Topology Admit Handler"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.625714   12687 topology_manager.go:200] "Topology Admit Handler"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.705163   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9542f720-d00e-49be-9fa2-5935cba45b97-xtables-lock\") pod \"kube-proxy-lzr48\" (UID: \"9542f720-d00e-49be-9fa2-5935cba45b97\") " pod="kube-system/kube-proxy-lzr48"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.705193   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7p5q\" (UniqueName: \"kubernetes.io/projected/9542f720-d00e-49be-9fa2-5935cba45b97-kube-api-access-l7p5q\") pod \"kube-proxy-lzr48\" (UID: \"9542f720-d00e-49be-9fa2-5935cba45b97\") " pod="kube-system/kube-proxy-lzr48"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.705205   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9542f720-d00e-49be-9fa2-5935cba45b97-kube-proxy\") pod \"kube-proxy-lzr48\" (UID: \"9542f720-d00e-49be-9fa2-5935cba45b97\") " pod="kube-system/kube-proxy-lzr48"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.705220   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9542f720-d00e-49be-9fa2-5935cba45b97-lib-modules\") pod \"kube-proxy-lzr48\" (UID: \"9542f720-d00e-49be-9fa2-5935cba45b97\") " pod="kube-system/kube-proxy-lzr48"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.807989   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpfv4\" (UniqueName: \"kubernetes.io/projected/77f62e6e-705e-481d-a013-baa77e1ce5cd-kube-api-access-rpfv4\") pod \"coredns-6d4b75cb6d-4c6b7\" (UID: \"77f62e6e-705e-481d-a013-baa77e1ce5cd\") " pod="kube-system/coredns-6d4b75cb6d-4c6b7"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.808023   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcfd\" (UniqueName: \"kubernetes.io/projected/599cca29-5a07-459c-a083-74e69be2cdfd-kube-api-access-dkcfd\") pod \"coredns-6d4b75cb6d-9m5qb\" (UID: \"599cca29-5a07-459c-a083-74e69be2cdfd\") " pod="kube-system/coredns-6d4b75cb6d-9m5qb"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.808043   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77f62e6e-705e-481d-a013-baa77e1ce5cd-config-volume\") pod \"coredns-6d4b75cb6d-4c6b7\" (UID: \"77f62e6e-705e-481d-a013-baa77e1ce5cd\") " pod="kube-system/coredns-6d4b75cb6d-4c6b7"
	Sep 04 20:16:40 running-upgrade-478000 kubelet[12687]: I0904 20:16:40.808059   12687 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/599cca29-5a07-459c-a083-74e69be2cdfd-config-volume\") pod \"coredns-6d4b75cb6d-9m5qb\" (UID: \"599cca29-5a07-459c-a083-74e69be2cdfd\") " pod="kube-system/coredns-6d4b75cb6d-9m5qb"
	Sep 04 20:16:41 running-upgrade-478000 kubelet[12687]: I0904 20:16:41.252344   12687 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1f84b8131fc57be83179a5829d043ab4a0d25f5f3bf4aa8af0142f441e3b816e"
	Sep 04 20:20:28 running-upgrade-478000 kubelet[12687]: I0904 20:20:28.810493   12687 scope.go:110] "RemoveContainer" containerID="c2e4fd07d88102c549268a5ff08cba2cd554cc63ec064ee04972996ecacf6125"
	Sep 04 20:20:28 running-upgrade-478000 kubelet[12687]: I0904 20:20:28.822106   12687 scope.go:110] "RemoveContainer" containerID="083b85426991a321195a5296f0f60fa47c63df49994d5d1befe06edaf4149341"
	
	
	==> storage-provisioner [9b415d028298] <==
	I0904 20:16:41.716524       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 20:16:41.723779       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 20:16:41.723797       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 20:16:41.727929       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 20:16:41.728171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4bfc9bf8-b014-4bb7-9255-b20fb53971d1", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-478000_ba3188f5-0aa6-4bd2-bb23-36025d4364ca became leader
	I0904 20:16:41.728264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-478000_ba3188f5-0aa6-4bd2-bb23-36025d4364ca!
	I0904 20:16:41.830562       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-478000_ba3188f5-0aa6-4bd2-bb23-36025d4364ca!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-478000 -n running-upgrade-478000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-478000 -n running-upgrade-478000: exit status 2 (15.652668416s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-478000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-478000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-478000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-478000: (1.217478375s)
--- FAIL: TestRunningBinaryUpgrade (591.80s)

TestKubernetesUpgrade (18.89s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-895000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-895000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.873891166s)

-- stdout --
	* [kubernetes-upgrade-895000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-895000" primary control-plane node in "kubernetes-upgrade-895000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-895000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0904 13:14:09.817194    4574 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:14:09.817322    4574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:14:09.817326    4574 out.go:358] Setting ErrFile to fd 2...
	I0904 13:14:09.817328    4574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:14:09.817458    4574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:14:09.818613    4574 out.go:352] Setting JSON to false
	I0904 13:14:09.835040    4574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4413,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:14:09.835128    4574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:14:09.842055    4574 out.go:177] * [kubernetes-upgrade-895000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:14:09.849023    4574 notify.go:220] Checking for updates...
	I0904 13:14:09.853696    4574 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:14:09.861863    4574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:14:09.864791    4574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:14:09.867872    4574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:14:09.870885    4574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:14:09.873842    4574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:14:09.877168    4574 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:14:09.877239    4574 config.go:182] Loaded profile config "running-upgrade-478000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:14:09.877281    4574 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:14:09.880889    4574 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:14:09.887847    4574 start.go:297] selected driver: qemu2
	I0904 13:14:09.887854    4574 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:14:09.887861    4574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:14:09.890374    4574 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:14:09.892866    4574 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:14:09.895866    4574 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 13:14:09.895883    4574 cni.go:84] Creating CNI manager for ""
	I0904 13:14:09.895891    4574 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 13:14:09.895931    4574 start.go:340] cluster config:
	{Name:kubernetes-upgrade-895000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:14:09.900207    4574 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:14:09.908710    4574 out.go:177] * Starting "kubernetes-upgrade-895000" primary control-plane node in "kubernetes-upgrade-895000" cluster
	I0904 13:14:09.912817    4574 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 13:14:09.912837    4574 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0904 13:14:09.912844    4574 cache.go:56] Caching tarball of preloaded images
	I0904 13:14:09.912905    4574 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:14:09.912910    4574 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0904 13:14:09.912978    4574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/kubernetes-upgrade-895000/config.json ...
	I0904 13:14:09.912999    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/kubernetes-upgrade-895000/config.json: {Name:mk366dbbd811d9c2968085404cb6607a59ee4998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:14:09.913353    4574 start.go:360] acquireMachinesLock for kubernetes-upgrade-895000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:14:09.913393    4574 start.go:364] duration metric: took 31.625µs to acquireMachinesLock for "kubernetes-upgrade-895000"
	I0904 13:14:09.913407    4574 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:14:09.913436    4574 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:14:09.920810    4574 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:14:09.936574    4574 start.go:159] libmachine.API.Create for "kubernetes-upgrade-895000" (driver="qemu2")
	I0904 13:14:09.936605    4574 client.go:168] LocalClient.Create starting
	I0904 13:14:09.936683    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:14:09.936712    4574 main.go:141] libmachine: Decoding PEM data...
	I0904 13:14:09.936724    4574 main.go:141] libmachine: Parsing certificate...
	I0904 13:14:09.936758    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:14:09.936780    4574 main.go:141] libmachine: Decoding PEM data...
	I0904 13:14:09.936785    4574 main.go:141] libmachine: Parsing certificate...
	I0904 13:14:09.937174    4574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:14:10.108371    4574 main.go:141] libmachine: Creating SSH key...
	I0904 13:14:10.186902    4574 main.go:141] libmachine: Creating Disk image...
	I0904 13:14:10.186907    4574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:14:10.187133    4574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:10.196355    4574 main.go:141] libmachine: STDOUT: 
	I0904 13:14:10.196374    4574 main.go:141] libmachine: STDERR: 
	I0904 13:14:10.196426    4574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2 +20000M
	I0904 13:14:10.204546    4574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:14:10.204564    4574 main.go:141] libmachine: STDERR: 
	I0904 13:14:10.204574    4574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:10.204579    4574 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:14:10.204592    4574 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:14:10.204622    4574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:25:50:73:33:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:10.206189    4574 main.go:141] libmachine: STDOUT: 
	I0904 13:14:10.206209    4574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:14:10.206226    4574 client.go:171] duration metric: took 269.622125ms to LocalClient.Create
	I0904 13:14:12.208419    4574 start.go:128] duration metric: took 2.294994333s to createHost
	I0904 13:14:12.208523    4574 start.go:83] releasing machines lock for "kubernetes-upgrade-895000", held for 2.295158125s
	W0904 13:14:12.208585    4574 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:14:12.218873    4574 out.go:177] * Deleting "kubernetes-upgrade-895000" in qemu2 ...
	W0904 13:14:12.248533    4574 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:14:12.248568    4574 start.go:729] Will try again in 5 seconds ...
	I0904 13:14:17.250698    4574 start.go:360] acquireMachinesLock for kubernetes-upgrade-895000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:14:17.251248    4574 start.go:364] duration metric: took 470.167µs to acquireMachinesLock for "kubernetes-upgrade-895000"
	I0904 13:14:17.251402    4574 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:14:17.251597    4574 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:14:17.260299    4574 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:14:17.307670    4574 start.go:159] libmachine.API.Create for "kubernetes-upgrade-895000" (driver="qemu2")
	I0904 13:14:17.307728    4574 client.go:168] LocalClient.Create starting
	I0904 13:14:17.307850    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:14:17.307925    4574 main.go:141] libmachine: Decoding PEM data...
	I0904 13:14:17.307941    4574 main.go:141] libmachine: Parsing certificate...
	I0904 13:14:17.308006    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:14:17.308050    4574 main.go:141] libmachine: Decoding PEM data...
	I0904 13:14:17.308065    4574 main.go:141] libmachine: Parsing certificate...
	I0904 13:14:17.308593    4574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:14:17.478684    4574 main.go:141] libmachine: Creating SSH key...
	I0904 13:14:17.604064    4574 main.go:141] libmachine: Creating Disk image...
	I0904 13:14:17.604071    4574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:14:17.604329    4574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:17.614056    4574 main.go:141] libmachine: STDOUT: 
	I0904 13:14:17.614080    4574 main.go:141] libmachine: STDERR: 
	I0904 13:14:17.614136    4574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2 +20000M
	I0904 13:14:17.622272    4574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:14:17.622290    4574 main.go:141] libmachine: STDERR: 
	I0904 13:14:17.622310    4574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:17.622313    4574 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:14:17.622325    4574 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:14:17.622352    4574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d1:02:50:aa:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:17.624216    4574 main.go:141] libmachine: STDOUT: 
	I0904 13:14:17.624230    4574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:14:17.624243    4574 client.go:171] duration metric: took 316.514583ms to LocalClient.Create
	I0904 13:14:19.626434    4574 start.go:128] duration metric: took 2.374827042s to createHost
	I0904 13:14:19.626539    4574 start.go:83] releasing machines lock for "kubernetes-upgrade-895000", held for 2.375306167s
	W0904 13:14:19.626937    4574 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:14:19.636507    4574 out.go:201] 
	W0904 13:14:19.642681    4574 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:14:19.642712    4574 out.go:270] * 
	* 
	W0904 13:14:19.644225    4574 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:14:19.652654    4574 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-895000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
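The failed start above is not a Kubernetes problem: the qemu2 VM never gets a network because socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, meaning no socket_vmnet daemon is listening on the agent. A minimal Go probe (a sketch using only the SocketVMnetPath value shown in the logs, not part of the test suite) reproduces the same failure mode independently of minikube:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path taken from the SocketVMnetPath value in the logs above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "Connection refused" here matches the driver failure: the socket
            // file may exist, but no socket_vmnet daemon is serving it.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }

If the dial is refused, restarting the socket_vmnet service on the agent (typically run as a root service when installed via Homebrew) would be the first thing to check before rerunning the suite.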
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-895000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-895000: (3.6189485s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-895000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-895000 status --format={{.Host}}: exit status 7 (57.370875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-895000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-895000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.176332125s)

-- stdout --
	* [kubernetes-upgrade-895000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-895000" primary control-plane node in "kubernetes-upgrade-895000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-895000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-895000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:14:23.370761    4616 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:14:23.370878    4616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:14:23.370881    4616 out.go:358] Setting ErrFile to fd 2...
	I0904 13:14:23.370884    4616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:14:23.371024    4616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:14:23.372026    4616 out.go:352] Setting JSON to false
	I0904 13:14:23.388459    4616 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4427,"bootTime":1725476436,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:14:23.388540    4616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:14:23.391212    4616 out.go:177] * [kubernetes-upgrade-895000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:14:23.398306    4616 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:14:23.398337    4616 notify.go:220] Checking for updates...
	I0904 13:14:23.404260    4616 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:14:23.407281    4616 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:14:23.410283    4616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:14:23.413276    4616 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:14:23.416313    4616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:14:23.419556    4616 config.go:182] Loaded profile config "kubernetes-upgrade-895000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0904 13:14:23.419823    4616 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:14:23.424284    4616 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:14:23.430184    4616 start.go:297] selected driver: qemu2
	I0904 13:14:23.430189    4616 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:14:23.430250    4616 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:14:23.432514    4616 cni.go:84] Creating CNI manager for ""
	I0904 13:14:23.432531    4616 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:14:23.432565    4616 start.go:340] cluster config:
	{Name:kubernetes-upgrade-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-895000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:14:23.436015    4616 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:14:23.444294    4616 out.go:177] * Starting "kubernetes-upgrade-895000" primary control-plane node in "kubernetes-upgrade-895000" cluster
	I0904 13:14:23.448199    4616 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:14:23.448212    4616 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:14:23.448224    4616 cache.go:56] Caching tarball of preloaded images
	I0904 13:14:23.448282    4616 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:14:23.448288    4616 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:14:23.448342    4616 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/kubernetes-upgrade-895000/config.json ...
	I0904 13:14:23.448890    4616 start.go:360] acquireMachinesLock for kubernetes-upgrade-895000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:14:23.448924    4616 start.go:364] duration metric: took 28µs to acquireMachinesLock for "kubernetes-upgrade-895000"
	I0904 13:14:23.448933    4616 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:14:23.448939    4616 fix.go:54] fixHost starting: 
	I0904 13:14:23.449058    4616 fix.go:112] recreateIfNeeded on kubernetes-upgrade-895000: state=Stopped err=<nil>
	W0904 13:14:23.449067    4616 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:14:23.453283    4616 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-895000" ...
	I0904 13:14:23.461267    4616 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:14:23.461304    4616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d1:02:50:aa:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:23.463260    4616 main.go:141] libmachine: STDOUT: 
	I0904 13:14:23.463283    4616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:14:23.463312    4616 fix.go:56] duration metric: took 14.372416ms for fixHost
	I0904 13:14:23.463317    4616 start.go:83] releasing machines lock for "kubernetes-upgrade-895000", held for 14.389291ms
	W0904 13:14:23.463322    4616 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:14:23.463364    4616 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:14:23.463368    4616 start.go:729] Will try again in 5 seconds ...
	I0904 13:14:28.465420    4616 start.go:360] acquireMachinesLock for kubernetes-upgrade-895000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:14:28.465824    4616 start.go:364] duration metric: took 315.791µs to acquireMachinesLock for "kubernetes-upgrade-895000"
	I0904 13:14:28.466052    4616 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:14:28.466073    4616 fix.go:54] fixHost starting: 
	I0904 13:14:28.466789    4616 fix.go:112] recreateIfNeeded on kubernetes-upgrade-895000: state=Stopped err=<nil>
	W0904 13:14:28.466816    4616 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:14:28.476399    4616 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-895000" ...
	I0904 13:14:28.480377    4616 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:14:28.480592    4616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d1:02:50:aa:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubernetes-upgrade-895000/disk.qcow2
	I0904 13:14:28.490852    4616 main.go:141] libmachine: STDOUT: 
	I0904 13:14:28.490919    4616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:14:28.491064    4616 fix.go:56] duration metric: took 24.992917ms for fixHost
	I0904 13:14:28.491090    4616 start.go:83] releasing machines lock for "kubernetes-upgrade-895000", held for 25.176125ms
	W0904 13:14:28.491291    4616 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-895000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-895000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:14:28.495364    4616 out.go:201] 
	W0904 13:14:28.498438    4616 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:14:28.498455    4616 out.go:270] * 
	* 
	W0904 13:14:28.500103    4616 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:14:28.509407    4616 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-895000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-895000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-895000 version --output=json: exit status 1 (51.1285ms)

** stderr ** 
	error: context "kubernetes-upgrade-895000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-09-04 13:14:28.570095 -0700 PDT m=+2985.632106417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-895000 -n kubernetes-upgrade-895000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-895000 -n kubernetes-upgrade-895000: exit status 7 (31.119041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-895000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-895000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-895000
--- FAIL: TestKubernetesUpgrade (18.89s)
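As both captures in this test show, minikube does not give up on the first socket_vmnet error: it deletes the half-created machine, waits a fixed five seconds, and tries exactly once more before exiting with GUEST_PROVISION (exit status 80). A compact sketch of that observed control flow; the startHost stub below is hypothetical, standing in for the driver's create/start call:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost is a hypothetical stand-in for the driver's create/start call;
    // in this run it always fails with the socket_vmnet connection error.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            // The first failure is non-fatal: log it and retry once after a
            // 5-second pause, matching "Will try again in 5 seconds ..." above.
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second)
            if err := startHost(); err != nil {
                fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
                os.Exit(80)
            }
        }
    }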

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.65s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19575
- KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1056499175/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.65s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19575
- KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2479112973/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)
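Both TestHyperkitDriverSkipUpgrade variants fail for the same structural reason: the job runs on a darwin/arm64 (Apple Silicon) agent, and hyperkit exists only for darwin/amd64, so minikube rejects the driver up front with DRV_UNSUPPORTED_OS (exit status 56) before any skip-upgrade logic runs. These are environment mismatches rather than regressions. A sketch of the platform gate the two captures imply, not minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    func main() {
        // Hyperkit ships only for darwin/amd64; on this agent GOOS/GOARCH is
        // darwin/arm64, so the check fires, mirroring the output above.
        if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
            fmt.Fprintf(os.Stderr,
                "X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
                runtime.GOOS, runtime.GOARCH)
            os.Exit(56)
        }
    }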

TestStoppedBinaryUpgrade/Upgrade (573.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3713255739 start -p stopped-upgrade-175000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3713255739 start -p stopped-upgrade-175000 --memory=2200 --vm-driver=qemu2 : (39.432761292s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3713255739 -p stopped-upgrade-175000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3713255739 -p stopped-upgrade-175000 stop: (12.108060709s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-175000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0904 13:16:30.877018    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 13:18:27.776873    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 13:18:55.861532    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-175000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.281371666s)

-- stdout --
	* [stopped-upgrade-175000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-175000" primary control-plane node in "stopped-upgrade-175000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-175000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0904 13:15:21.440044    4660 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:15:21.440187    4660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:15:21.440191    4660 out.go:358] Setting ErrFile to fd 2...
	I0904 13:15:21.440194    4660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:15:21.440336    4660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:15:21.441499    4660 out.go:352] Setting JSON to false
	I0904 13:15:21.461219    4660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4485,"bootTime":1725476436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:15:21.461305    4660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:15:21.466578    4660 out.go:177] * [stopped-upgrade-175000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:15:21.474444    4660 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:15:21.474501    4660 notify.go:220] Checking for updates...
	I0904 13:15:21.481523    4660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:15:21.484530    4660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:15:21.487574    4660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:15:21.490498    4660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:15:21.493523    4660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:15:21.496668    4660 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:15:21.499455    4660 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0904 13:15:21.502532    4660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:15:21.505368    4660 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:15:21.512496    4660 start.go:297] selected driver: qemu2
	I0904 13:15:21.512502    4660 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:15:21.512552    4660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:15:21.515184    4660 cni.go:84] Creating CNI manager for ""
	I0904 13:15:21.515201    4660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:15:21.515228    4660 start.go:340] cluster config:
	{Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:15:21.515280    4660 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:15:21.523454    4660 out.go:177] * Starting "stopped-upgrade-175000" primary control-plane node in "stopped-upgrade-175000" cluster
	I0904 13:15:21.527521    4660 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0904 13:15:21.527538    4660 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0904 13:15:21.527546    4660 cache.go:56] Caching tarball of preloaded images
	I0904 13:15:21.527607    4660 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:15:21.527618    4660 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0904 13:15:21.527678    4660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/config.json ...
	I0904 13:15:21.528196    4660 start.go:360] acquireMachinesLock for stopped-upgrade-175000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:15:21.528228    4660 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "stopped-upgrade-175000"
	I0904 13:15:21.528238    4660 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:15:21.528243    4660 fix.go:54] fixHost starting: 
	I0904 13:15:21.528349    4660 fix.go:112] recreateIfNeeded on stopped-upgrade-175000: state=Stopped err=<nil>
	W0904 13:15:21.528357    4660 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:15:21.532535    4660 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-175000" ...
	I0904 13:15:21.540521    4660 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:15:21.540607    4660 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50529-:22,hostfwd=tcp::50530-:2376,hostname=stopped-upgrade-175000 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/disk.qcow2
	I0904 13:15:21.586534    4660 main.go:141] libmachine: STDOUT: 
	I0904 13:15:21.586567    4660 main.go:141] libmachine: STDERR: 
	I0904 13:15:21.586572    4660 main.go:141] libmachine: Waiting for VM to start (ssh -p 50529 docker@127.0.0.1)...
	I0904 13:15:41.409785    4660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/config.json ...
	I0904 13:15:41.410336    4660 machine.go:93] provisionDockerMachine start ...
	I0904 13:15:41.410437    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.410818    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.410829    4660 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 13:15:41.490053    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
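
provisionDockerMachine drives everything that follows through SSH on the forwarded port. A self-contained sketch of one such round trip using golang.org/x/crypto/ssh; the user, port, and key path are taken from the log, but this is illustrative, not minikube's internal SSH client:

package main

// Sketch: run a single command on the VM over the forwarded SSH port.
import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("id_rsa") // machines/<name>/id_rsa in the log
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
	}
	client, err := ssh.Dial("tcp", "localhost:50529", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}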
	
	I0904 13:15:41.490083    4660 buildroot.go:166] provisioning hostname "stopped-upgrade-175000"
	I0904 13:15:41.490184    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.490383    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.490394    4660 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-175000 && echo "stopped-upgrade-175000" | sudo tee /etc/hostname
	I0904 13:15:41.564232    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-175000
	
	I0904 13:15:41.564283    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.564397    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.564406    4660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-175000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-175000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-175000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 13:15:41.630311    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
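
The script above is idempotent: it leaves /etc/hosts alone when a line already names the host, rewrites an existing 127.0.1.1 entry when one is present, and appends otherwise. A sketch of rendering it for an arbitrary hostname, the hostname being the only templated value:

package main

// Sketch: render the idempotent /etc/hosts fix-up for a given hostname,
// mirroring the script in the log.
import "fmt"

func hostsScript(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsScript("stopped-upgrade-175000"))
}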
	I0904 13:15:41.630324    4660 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19575-1140/.minikube CaCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19575-1140/.minikube}
	I0904 13:15:41.630332    4660 buildroot.go:174] setting up certificates
	I0904 13:15:41.630336    4660 provision.go:84] configureAuth start
	I0904 13:15:41.630341    4660 provision.go:143] copyHostCerts
	I0904 13:15:41.630424    4660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem, removing ...
	I0904 13:15:41.630430    4660 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem
	I0904 13:15:41.630894    4660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.pem (1078 bytes)
	I0904 13:15:41.631132    4660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem, removing ...
	I0904 13:15:41.631136    4660 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem
	I0904 13:15:41.631200    4660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/cert.pem (1123 bytes)
	I0904 13:15:41.631335    4660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem, removing ...
	I0904 13:15:41.631338    4660 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem
	I0904 13:15:41.631396    4660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19575-1140/.minikube/key.pem (1675 bytes)
	I0904 13:15:41.631489    4660 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-175000 san=[127.0.0.1 localhost minikube stopped-upgrade-175000]
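
configureAuth issues a server certificate whose SANs cover the names listed above. A minimal crypto/x509 sketch of that step, self-signed for brevity where minikube signs with its CA key pair; the 26280h lifetime matches the CertExpiration field in the profile dump:

package main

// Sketch: issue a server certificate with the SANs seen in the log.
// Self-signed for brevity; minikube signs with ca.pem/ca-key.pem instead.
import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-175000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-175000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}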
	I0904 13:15:41.835985    4660 provision.go:177] copyRemoteCerts
	I0904 13:15:41.836031    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 13:15:41.836041    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:15:41.870089    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 13:15:41.876906    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0904 13:15:41.883525    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 13:15:41.890338    4660 provision.go:87] duration metric: took 259.996875ms to configureAuth
	I0904 13:15:41.890347    4660 buildroot.go:189] setting minikube options for container-runtime
	I0904 13:15:41.890444    4660 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:15:41.890479    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.890572    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.890576    4660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 13:15:41.955514    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 13:15:41.955523    4660 buildroot.go:70] root file system type: tmpfs
	I0904 13:15:41.955578    4660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 13:15:41.955645    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:41.955758    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:41.955796    4660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 13:15:42.022832    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 13:15:42.022890    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:42.023008    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:42.023017    4660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 13:15:42.362148    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0904 13:15:42.362160    4660 machine.go:96] duration metric: took 951.829833ms to provisionDockerMachine
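
The diff-or-install one-liner above works because diff exits non-zero when the unit is missing or differs (here it is missing, hence the "can't stat" line), so the mv/daemon-reload/enable/restart branch runs exactly when the unit changed. The same idiom in Go; a sketch to be run as root, with the service name hard-coded:

package main

// Sketch: install a systemd unit only when its content changed, mirroring
// the diff-or-install one-liner in the log.
import (
	"bytes"
	"os"
	"os/exec"
)

func installUnit(path string, content []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unchanged; nothing to do
	}
	if err := os.WriteFile(path+".new", content, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit, err := os.ReadFile("docker.service.new")
	if err != nil {
		panic(err)
	}
	if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
		panic(err)
	}
}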
	I0904 13:15:42.362167    4660 start.go:293] postStartSetup for "stopped-upgrade-175000" (driver="qemu2")
	I0904 13:15:42.362173    4660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 13:15:42.362237    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 13:15:42.362246    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:15:42.397712    4660 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 13:15:42.398981    4660 info.go:137] Remote host: Buildroot 2021.02.12
	I0904 13:15:42.398988    4660 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/addons for local assets ...
	I0904 13:15:42.399073    4660 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19575-1140/.minikube/files for local assets ...
	I0904 13:15:42.399201    4660 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem -> 16612.pem in /etc/ssl/certs
	I0904 13:15:42.399326    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 13:15:42.402204    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem --> /etc/ssl/certs/16612.pem (1708 bytes)
	I0904 13:15:42.409281    4660 start.go:296] duration metric: took 47.110375ms for postStartSetup
	I0904 13:15:42.409294    4660 fix.go:56] duration metric: took 20.881357458s for fixHost
	I0904 13:15:42.409324    4660 main.go:141] libmachine: Using SSH client type: native
	I0904 13:15:42.409427    4660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bc45a0] 0x100bc6e00 <nil>  [] 0s} localhost 50529 <nil> <nil>}
	I0904 13:15:42.409432    4660 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 13:15:42.475839    4660 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725480942.098003921
	
	I0904 13:15:42.475848    4660 fix.go:216] guest clock: 1725480942.098003921
	I0904 13:15:42.475853    4660 fix.go:229] Guest: 2024-09-04 13:15:42.098003921 -0700 PDT Remote: 2024-09-04 13:15:42.409295 -0700 PDT m=+20.999083751 (delta=-311.291079ms)
	I0904 13:15:42.475866    4660 fix.go:200] guest clock delta is within tolerance: -311.291079ms
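
fix.go parses the guest's `date +%s.%N` output and compares it with the host clock; here the guest runs about 311ms behind, inside tolerance. A sketch of the parse-and-compare step; the 2s tolerance below is illustrative, not minikube's actual threshold:

package main

// Sketch: parse `date +%s.%N` output and compute the guest-minus-host
// clock delta. %N is zero-padded to nine digits, so the fractional part
// parses directly as nanoseconds.
import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(out string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	if len(parts) != 2 {
		return 0, fmt.Errorf("unexpected date output %q", out)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return 0, err
	}
	return time.Unix(sec, nsec).Sub(time.Now()), nil
}

func main() {
	delta, err := guestClockDelta("1725480942.098003921\n") // value from the log
	if err != nil {
		panic(err)
	}
	// 2s tolerance is illustrative only.
	if delta < -2*time.Second || delta > 2*time.Second {
		fmt.Println("guest clock out of tolerance:", delta)
	} else {
		fmt.Println("guest clock delta within tolerance:", delta)
	}
}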
	I0904 13:15:42.475868    4660 start.go:83] releasing machines lock for "stopped-upgrade-175000", held for 20.94794175s
	I0904 13:15:42.475944    4660 ssh_runner.go:195] Run: cat /version.json
	I0904 13:15:42.475956    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:15:42.475945    4660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 13:15:42.475993    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	W0904 13:15:42.476630    4660 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50529: connect: connection refused
	I0904 13:15:42.476652    4660 retry.go:31] will retry after 230.38462ms: dial tcp [::1]:50529: connect: connection refused
	W0904 13:15:42.744588    4660 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0904 13:15:42.744685    4660 ssh_runner.go:195] Run: systemctl --version
	I0904 13:15:42.747262    4660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 13:15:42.749814    4660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 13:15:42.749853    4660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0904 13:15:42.754234    4660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0904 13:15:42.760437    4660 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 13:15:42.760447    4660 start.go:495] detecting cgroup driver to use...
	I0904 13:15:42.760518    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 13:15:42.768683    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0904 13:15:42.772381    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 13:15:42.775312    4660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 13:15:42.775334    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 13:15:42.778469    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 13:15:42.781655    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 13:15:42.784681    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 13:15:42.787356    4660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 13:15:42.790458    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 13:15:42.793691    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 13:15:42.796662    4660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0904 13:15:42.799419    4660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 13:15:42.802350    4660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 13:15:42.805271    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:42.886931    4660 ssh_runner.go:195] Run: sudo systemctl restart containerd
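
The sed chain above rewrites /etc/containerd/config.toml in place; the pivotal edit is forcing SystemdCgroup = false so containerd uses the cgroupfs driver. The same rewrite in Go with regexp, one substitution shown (run it against a copy of the file to experiment):

package main

// Sketch: the SystemdCgroup rewrite from the log, done with regexp
// instead of sed. The capture group preserves leading indentation.
import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}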
	I0904 13:15:42.893417    4660 start.go:495] detecting cgroup driver to use...
	I0904 13:15:42.893480    4660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 13:15:42.901682    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 13:15:42.906112    4660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 13:15:42.913130    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 13:15:42.917830    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 13:15:42.922382    4660 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 13:15:42.966079    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 13:15:42.971531    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 13:15:42.977232    4660 ssh_runner.go:195] Run: which cri-dockerd
	I0904 13:15:42.978292    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 13:15:42.980977    4660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0904 13:15:42.985924    4660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 13:15:43.063126    4660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 13:15:43.130918    4660 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 13:15:43.130973    4660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 13:15:43.136216    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:43.201303    4660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 13:15:44.362251    4660 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160950375s)
	I0904 13:15:44.362311    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 13:15:44.367068    4660 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0904 13:15:44.375310    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 13:15:44.380335    4660 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 13:15:44.450081    4660 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 13:15:44.514664    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:44.578322    4660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 13:15:44.584734    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 13:15:44.588937    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:44.642106    4660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 13:15:44.683033    4660 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 13:15:44.683113    4660 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 13:15:44.686399    4660 start.go:563] Will wait 60s for crictl version
	I0904 13:15:44.686455    4660 ssh_runner.go:195] Run: which crictl
	I0904 13:15:44.687698    4660 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 13:15:44.702300    4660 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0904 13:15:44.702371    4660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 13:15:44.717769    4660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 13:15:44.737646    4660 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0904 13:15:44.737730    4660 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0904 13:15:44.739207    4660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 13:15:44.743187    4660 kubeadm.go:883] updating cluster {Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0904 13:15:44.743237    4660 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0904 13:15:44.743285    4660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 13:15:44.754344    4660 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 13:15:44.754358    4660 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0904 13:15:44.754406    4660 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 13:15:44.758163    4660 ssh_runner.go:195] Run: which lz4
	I0904 13:15:44.759363    4660 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 13:15:44.760776    4660 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 13:15:44.760791    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
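
The stat-then-scp pair above is the transfer idiom used throughout this log: stat the remote path first and copy only when the existence check exits with status 1. A sketch of that control flow; runOverSSH is a hypothetical stand-in for an SSH session (see the ssh.Dial sketch earlier), and the transfer itself is elided:

package main

// Sketch of the existence-check-then-copy idiom from ssh_runner.
import (
	"errors"
	"fmt"
)

// runOverSSH is a hypothetical helper: run cmd on the VM, return its error.
// Stubbed so the control flow compiles; wire it to an *ssh.Session.
func runOverSSH(cmd string) error {
	return errors.New("stub: pretend the remote stat exited with status 1")
}

func ensureRemoteFile(local, remote string) error {
	if err := runOverSSH(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
		return nil // already present, skip the copy
	}
	fmt.Printf("existence check failed, copying %s --> %s\n", local, remote)
	return nil // scp/SFTP transfer would go here
}

func main() {
	if err := ensureRemoteFile("preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
}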
	I0904 13:15:45.691201    4660 docker.go:649] duration metric: took 931.879167ms to copy over tarball
	I0904 13:15:45.691257    4660 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 13:15:46.843867    4660 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152615375s)
	I0904 13:15:46.843885    4660 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0904 13:15:46.859292    4660 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 13:15:46.862604    4660 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0904 13:15:46.867786    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:46.933711    4660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 13:15:48.595185    4660 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.661470667s)
	I0904 13:15:48.595270    4660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 13:15:48.608213    4660 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 13:15:48.608222    4660 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0904 13:15:48.608228    4660 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0904 13:15:48.612694    4660 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:48.614342    4660 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:48.616684    4660 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0904 13:15:48.616726    4660 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:48.618333    4660 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:48.618348    4660 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:48.619539    4660 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:48.619645    4660 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0904 13:15:48.621097    4660 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:48.621323    4660 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:48.622308    4660 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:48.622420    4660 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:48.623324    4660 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:48.623593    4660 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:48.624555    4660 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:48.625447    4660 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.045018    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0904 13:15:49.056672    4660 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0904 13:15:49.056697    4660 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0904 13:15:49.056746    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0904 13:15:49.066938    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0904 13:15:49.067331    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0904 13:15:49.069242    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0904 13:15:49.069252    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0904 13:15:49.070083    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:49.077677    4660 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0904 13:15:49.077689    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0904 13:15:49.080961    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:49.081408    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:49.082576    4660 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0904 13:15:49.082592    4660 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0904 13:15:49.082620    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0904 13:15:49.085464    4660 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0904 13:15:49.085576    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:49.125286    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0904 13:15:49.125310    4660 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0904 13:15:49.125327    4660 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:49.125349    4660 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0904 13:15:49.125360    4660 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:49.125387    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0904 13:15:49.125389    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0904 13:15:49.125360    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0904 13:15:49.125396    4660 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0904 13:15:49.125413    4660 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:49.125429    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0904 13:15:49.125717    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:49.136351    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0904 13:15:49.136483    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0904 13:15:49.147102    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0904 13:15:49.147129    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0904 13:15:49.147231    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0904 13:15:49.151980    4660 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0904 13:15:49.151988    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0904 13:15:49.152002    4660 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:49.152008    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0904 13:15:49.152036    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0904 13:15:49.152041    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0904 13:15:49.152046    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0904 13:15:49.152272    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.178363    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0904 13:15:49.193299    4660 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0904 13:15:49.193328    4660 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.193391    4660 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0904 13:15:49.253025    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0904 13:15:49.253863    4660 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0904 13:15:49.253871    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0904 13:15:49.373209    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0904 13:15:49.408865    4660 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0904 13:15:49.408973    4660 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:49.458815    4660 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0904 13:15:49.458838    4660 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:49.458893    4660 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:15:49.464853    4660 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0904 13:15:49.464867    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0904 13:15:49.476912    4660 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0904 13:15:49.477045    4660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0904 13:15:49.612179    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0904 13:15:49.612218    4660 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0904 13:15:49.612246    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0904 13:15:49.646316    4660 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0904 13:15:49.646330    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0904 13:15:49.882145    4660 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0904 13:15:49.882191    4660 cache_images.go:92] duration metric: took 1.273976667s to LoadCachedImages
	W0904 13:15:49.882230    4660 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
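
Each cached tarball above is loaded by piping the file into docker load over SSH (`sudo cat ... | docker load`). Done locally, the pipe is simply stdin; a sketch:

package main

// Sketch: load an image tarball the way the log does, but locally:
// feed the file to `docker load` on stdin instead of `cat | docker load`.
import (
	"os"
	"os/exec"
)

func dockerLoad(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
		panic(err)
	}
}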
	I0904 13:15:49.882237    4660 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0904 13:15:49.882295    4660 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-175000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 13:15:49.882376    4660 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0904 13:15:49.896435    4660 cni.go:84] Creating CNI manager for ""
	I0904 13:15:49.896447    4660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:15:49.896455    4660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 13:15:49.896465    4660 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-175000 NodeName:stopped-upgrade-175000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 13:15:49.896523    4660 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-175000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
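
The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch that walks such a stream and prints each document's kind, assuming gopkg.in/yaml.v3:

package main

// Sketch: iterate a multi-document kubeadm config stream and print each
// document's kind and apiVersion.
import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}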
	
	I0904 13:15:49.896579    4660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0904 13:15:49.899668    4660 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 13:15:49.899698    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 13:15:49.902495    4660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0904 13:15:49.907509    4660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 13:15:49.912365    4660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0904 13:15:49.917499    4660 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0904 13:15:49.918866    4660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 13:15:49.922480    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:15:49.995994    4660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 13:15:50.001903    4660 certs.go:68] Setting up /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000 for IP: 10.0.2.15
	I0904 13:15:50.001911    4660 certs.go:194] generating shared ca certs ...
	I0904 13:15:50.001920    4660 certs.go:226] acquiring lock for ca certs: {Name:mkd62cc1bdffb2500ac7e662aba46cadabbc6839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.002111    4660 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key
	I0904 13:15:50.002163    4660 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key
	I0904 13:15:50.002171    4660 certs.go:256] generating profile certs ...
	I0904 13:15:50.002255    4660 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.key
	I0904 13:15:50.002273    4660 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3
	I0904 13:15:50.002286    4660 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0904 13:15:50.179626    4660 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3 ...
	I0904 13:15:50.179643    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3: {Name:mkd4e9ea02d9b84638975702181e1980ddc91b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.180159    4660 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3 ...
	I0904 13:15:50.180168    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3: {Name:mkde62187c9daa95da8033e99db314a77b79f42b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.180325    4660 certs.go:381] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt.d48d1bf3 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt
	I0904 13:15:50.180493    4660 certs.go:385] copying /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key.d48d1bf3 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key
	I0904 13:15:50.180657    4660 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/proxy-client.key
	I0904 13:15:50.180802    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661.pem (1338 bytes)
	W0904 13:15:50.180831    4660 certs.go:480] ignoring /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661_empty.pem, impossibly tiny 0 bytes
	I0904 13:15:50.180836    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 13:15:50.180865    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem (1078 bytes)
	I0904 13:15:50.180887    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem (1123 bytes)
	I0904 13:15:50.180910    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/key.pem (1675 bytes)
	I0904 13:15:50.180955    4660 certs.go:484] found cert: /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem (1708 bytes)
	I0904 13:15:50.181320    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 13:15:50.188293    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 13:15:50.195474    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 13:15:50.202424    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 13:15:50.209852    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0904 13:15:50.216989    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 13:15:50.223681    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 13:15:50.230508    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 13:15:50.237840    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/ssl/certs/16612.pem --> /usr/share/ca-certificates/16612.pem (1708 bytes)
	I0904 13:15:50.244619    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 13:15:50.250919    4660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/1661.pem --> /usr/share/ca-certificates/1661.pem (1338 bytes)
	I0904 13:15:50.257990    4660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 13:15:50.263033    4660 ssh_runner.go:195] Run: openssl version
	I0904 13:15:50.265004    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 13:15:50.268073    4660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:15:50.269573    4660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:15:50.269603    4660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 13:15:50.271686    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 13:15:50.275282    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1661.pem && ln -fs /usr/share/ca-certificates/1661.pem /etc/ssl/certs/1661.pem"
	I0904 13:15:50.278370    4660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1661.pem
	I0904 13:15:50.279842    4660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 19:41 /usr/share/ca-certificates/1661.pem
	I0904 13:15:50.279868    4660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1661.pem
	I0904 13:15:50.281568    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1661.pem /etc/ssl/certs/51391683.0"
	I0904 13:15:50.284387    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16612.pem && ln -fs /usr/share/ca-certificates/16612.pem /etc/ssl/certs/16612.pem"
	I0904 13:15:50.287401    4660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16612.pem
	I0904 13:15:50.289006    4660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 19:41 /usr/share/ca-certificates/16612.pem
	I0904 13:15:50.289033    4660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16612.pem
	I0904 13:15:50.290714    4660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16612.pem /etc/ssl/certs/3ec20f2e.0"
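
Note: the test/ln/openssl sequence above is the standard OpenSSL CA-directory layout: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (b5213941.0 is the hash of minikubeCA.pem here, 51391683.0 and 3ec20f2e.0 those of the two extra PEMs). One iteration, sketched with only the commands visible in the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    # Compute the subject hash OpenSSL uses to look up trusted CAs
    hash=$(openssl x509 -hash -noout -in "$pem")
    # Link the CA into the hashed directory unless the link already exists
    sudo /bin/bash -c "test -L /etc/ssl/certs/${hash}.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0"
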
	I0904 13:15:50.293720    4660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 13:15:50.295052    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 13:15:50.296924    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 13:15:50.298689    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 13:15:50.300471    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 13:15:50.302203    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 13:15:50.303962    4660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
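
Note: each `openssl x509 ... -checkend 86400` above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would trigger regeneration. A sketch of the check as a standalone test (hypothetical wrapper around the exact command in the log):

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h - would be regenerated"
    fi
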
	I0904 13:15:50.305690    4660 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50564 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0904 13:15:50.305757    4660 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 13:15:50.316041    4660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 13:15:50.319149    4660 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 13:15:50.319155    4660 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0904 13:15:50.319174    4660 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 13:15:50.322598    4660 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 13:15:50.322897    4660 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-175000" does not appear in /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:15:50.323005    4660 kubeconfig.go:62] /Users/jenkins/minikube-integration/19575-1140/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-175000" cluster setting kubeconfig missing "stopped-upgrade-175000" context setting]
	I0904 13:15:50.323202    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:15:50.323647    4660 kapi.go:59] client config for stopped-upgrade-175000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.key", CAFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10217ff80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 13:15:50.323984    4660 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 13:15:50.326766    4660 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-175000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
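
Note: the drift check above rides on diff's exit status: `sudo diff -u` returns 1 when the deployed kubeadm.yaml differs from the freshly rendered kubeadm.yaml.new (here criSocket gains a unix:// scheme and the cgroup driver flips from systemd to cgroupfs), and that non-zero status is what makes minikube reconfigure rather than reuse the old config. The same check, sketched with the paths from the log:

    # diff exits 0 if identical, 1 if different, >1 on error
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        echo "kubeadm config drift detected - reconfigure from the .new file"
    fi
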
	I0904 13:15:50.326772    4660 kubeadm.go:1160] stopping kube-system containers ...
	I0904 13:15:50.326815    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 13:15:50.337594    4660 docker.go:483] Stopping containers: [6a33a036cd8e cf12e052d1ba b2ede15d553f 05c225f19632 bd580d1877e3 58f0be9a136f d7e09e7da4e6 89d367665f9b]
	I0904 13:15:50.337665    4660 ssh_runner.go:195] Run: docker stop 6a33a036cd8e cf12e052d1ba b2ede15d553f 05c225f19632 bd580d1877e3 58f0be9a136f d7e09e7da4e6 89d367665f9b
	I0904 13:15:50.348401    4660 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0904 13:15:50.354022    4660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 13:15:50.357199    4660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 13:15:50.357203    4660 kubeadm.go:157] found existing configuration files:
	
	I0904 13:15:50.357223    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf
	I0904 13:15:50.359715    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 13:15:50.359738    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 13:15:50.362515    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf
	I0904 13:15:50.365518    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 13:15:50.365538    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 13:15:50.368052    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf
	I0904 13:15:50.370763    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 13:15:50.370792    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 13:15:50.373913    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf
	I0904 13:15:50.376942    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 13:15:50.376965    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
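
Note: the four grep/rm pairs above are one pattern applied per control-plane kubeconfig: if the file does not reference the expected endpoint https://control-plane.minikube.internal:50564 (here every grep exits 2 because the files are absent), the file is removed so kubeadm can regenerate it. Collapsed into a loop as a sketch:

    ep="https://control-plane.minikube.internal:50564"
    for f in admin kubelet controller-manager scheduler; do
        # Remove any kubeconfig that does not point at the expected endpoint
        sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
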
	I0904 13:15:50.379427    4660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 13:15:50.382566    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:50.405297    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:50.980414    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:51.106678    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 13:15:51.128256    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
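
Note: the restart path replays individual kubeadm init phases rather than a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same rendered config, with PATH pointing at the cached v1.24.1 binaries. The sequence from the log, sketched as a loop:

    bin="/var/lib/minikube/binaries/v1.24.1"
    cfg=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is intentionally word-split into the phase name and its argument
        sudo env PATH="${bin}:$PATH" kubeadm init phase $phase --config "$cfg"
    done
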
	I0904 13:15:51.152632    4660 api_server.go:52] waiting for apiserver process to appear ...
	I0904 13:15:51.152703    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:51.654779    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:52.154783    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:52.654764    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:15:52.659023    4660 api_server.go:72] duration metric: took 1.506412542s to wait for apiserver process to appear ...
	I0904 13:15:52.659046    4660 api_server.go:88] waiting for apiserver healthz status ...
	I0904 13:15:52.659079    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:15:57.661137    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:15:57.661197    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:02.661410    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:02.661438    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:07.661722    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:07.661785    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:12.662278    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:12.662300    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:17.662797    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:17.662829    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:22.663532    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:22.663558    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:27.664433    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:27.664458    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:32.665668    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:32.665740    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:37.667473    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:37.667512    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:42.668700    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:42.668719    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:47.670897    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:16:47.670968    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:16:52.673372    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
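
Note: from 13:15:52 onward the driver polls https://10.0.2.15:8443/healthz with a roughly 5-second client timeout, and every probe in this section times out, so the wait loop interleaves health checks with the log-gathering passes that follow. An equivalent manual probe (curl is an assumption here; the log uses minikube's internal HTTP client):

    # -k: the apiserver cert is not in the host trust store; --max-time mirrors the ~5s timeout
    curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not healthy yet"
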
	I0904 13:16:52.673623    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:16:52.692689    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:16:52.692816    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:16:52.713995    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:16:52.714088    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:16:52.725283    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:16:52.725378    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:16:52.735998    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:16:52.736080    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:16:52.746446    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:16:52.746514    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:16:52.756767    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:16:52.756846    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:16:52.768169    4660 logs.go:276] 0 containers: []
	W0904 13:16:52.768180    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:16:52.768250    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:16:52.778065    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:16:52.778092    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:16:52.778097    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:16:52.794252    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:16:52.794263    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:16:52.806015    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:16:52.806025    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:16:52.831974    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:16:52.831987    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:16:52.836275    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:16:52.836282    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:16:52.911575    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:16:52.911587    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:16:52.952136    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:16:52.952145    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:16:52.970513    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:16:52.970523    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:16:52.985957    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:16:52.985973    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:16:52.998083    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:16:52.998094    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:16:53.011348    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:16:53.011359    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:16:53.028659    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:16:53.028669    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:16:53.040419    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:16:53.040429    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:16:53.051262    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:16:53.051275    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:16:53.089159    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:16:53.089185    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:16:53.103875    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:16:53.103885    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:16:53.121560    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:16:53.121574    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
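
Note: each gathering pass above (and each of the near-identical passes below) is the same fixed sweep: `docker ps -a` with a per-component name filter to collect container IDs, then `docker logs --tail 400` per container, plus journalctl for kubelet and docker, dmesg, `kubectl describe nodes`, and a crictl-or-docker fallback for container status. One component's sweep, sketched with only commands present in the log:

    # Find kube-apiserver containers (running or exited) and dump their recent logs
    ids=$(docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}})
    for id in $ids; do
        docker logs --tail 400 "$id"
    done
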
	I0904 13:16:55.635998    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:00.638744    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:00.639435    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:00.676510    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:00.676646    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:00.695534    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:00.695641    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:00.709317    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:00.709399    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:00.721556    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:00.721628    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:00.732215    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:00.732285    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:00.743303    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:00.743374    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:00.759036    4660 logs.go:276] 0 containers: []
	W0904 13:17:00.759046    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:00.759101    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:00.773455    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:00.773474    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:00.773480    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:00.784646    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:00.784656    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:00.811120    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:00.811130    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:00.850418    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:00.850427    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:00.865773    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:00.865783    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:00.881325    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:00.881336    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:00.899027    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:00.899039    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:00.911656    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:00.911666    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:00.922805    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:00.922817    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:00.940402    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:00.940413    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:00.979655    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:00.979668    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:00.991161    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:00.991175    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:01.002828    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:01.002837    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:01.006881    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:01.006888    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:01.043505    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:01.043517    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:01.055419    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:01.055430    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:01.070456    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:01.070467    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:03.585234    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:08.587554    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:08.587758    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:08.613237    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:08.613334    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:08.632371    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:08.632449    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:08.644856    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:08.644920    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:08.660465    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:08.660535    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:08.670757    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:08.670815    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:08.681559    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:08.681618    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:08.691648    4660 logs.go:276] 0 containers: []
	W0904 13:17:08.691660    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:08.691716    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:08.704026    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:08.704044    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:08.704050    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:08.716252    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:08.716263    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:08.727457    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:08.727468    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:08.739067    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:08.739079    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:08.754855    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:08.754868    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:08.766359    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:08.766370    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:08.784513    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:08.784525    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:08.808534    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:08.808542    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:08.844903    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:08.844915    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:08.849129    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:08.849136    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:08.861136    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:08.861156    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:08.874986    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:08.875002    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:08.914676    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:08.914690    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:08.926512    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:08.926523    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:08.944199    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:08.944210    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:08.955835    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:08.955849    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:08.991335    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:08.991347    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:11.507461    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:16.508330    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:16.508479    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:16.525115    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:16.525204    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:16.538153    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:16.538224    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:16.549732    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:16.549807    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:16.560288    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:16.560361    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:16.570931    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:16.571001    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:16.581493    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:16.581556    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:16.592528    4660 logs.go:276] 0 containers: []
	W0904 13:17:16.592540    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:16.592599    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:16.602911    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:16.602929    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:16.602934    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:16.620279    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:16.620289    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:16.635826    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:16.635839    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:16.649854    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:16.649864    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:16.663467    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:16.663478    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:16.675379    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:16.675389    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:16.687266    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:16.687276    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:16.699025    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:16.699037    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:16.740394    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:16.740411    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:16.752335    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:16.752348    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:16.764178    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:16.764190    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:16.800881    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:16.800892    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:16.805122    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:16.805129    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:16.841794    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:16.841804    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:16.857283    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:16.857294    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:16.875003    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:16.875016    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:16.887450    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:16.887460    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:19.413294    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:24.415862    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:24.416039    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:24.431838    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:24.431909    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:24.442520    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:24.442591    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:24.460909    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:24.460984    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:24.472171    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:24.472246    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:24.482581    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:24.482645    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:24.493711    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:24.493792    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:24.504580    4660 logs.go:276] 0 containers: []
	W0904 13:17:24.504592    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:24.504658    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:24.519873    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:24.519891    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:24.519897    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:24.557189    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:24.557198    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:24.571866    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:24.571878    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:24.584199    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:24.584209    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:24.622328    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:24.622341    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:24.634618    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:24.634633    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:24.647233    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:24.647244    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:24.671689    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:24.671699    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:24.689677    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:24.689688    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:24.701779    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:24.701791    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:24.713121    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:24.713135    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:24.717897    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:24.717906    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:24.759922    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:24.759936    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:24.774453    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:24.774467    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:24.789738    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:24.789751    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:24.805582    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:24.805591    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:24.820390    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:24.820400    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:27.334117    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:32.336437    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:32.336747    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:32.371141    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:32.371266    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:32.393287    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:32.393404    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:32.408500    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:32.408573    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:32.421131    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:32.421207    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:32.432385    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:32.432457    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:32.443280    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:32.443342    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:32.453679    4660 logs.go:276] 0 containers: []
	W0904 13:17:32.453688    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:32.453738    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:32.464700    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:32.464724    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:32.464730    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:32.476986    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:32.476998    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:32.501630    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:32.501644    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:32.514510    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:32.514521    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:32.529835    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:32.529845    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:32.577540    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:32.577550    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:32.592708    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:32.592723    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:32.603636    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:32.603649    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:32.615203    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:32.615217    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:32.654063    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:32.654073    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:32.658817    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:32.658824    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:32.693218    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:32.693229    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:32.707643    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:32.707656    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:32.722154    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:32.722163    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:32.741873    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:32.741885    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:32.752888    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:32.752902    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:32.764694    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:32.764707    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:35.289033    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:40.291370    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:40.291523    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:40.311584    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:40.311685    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:40.326381    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:40.326460    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:40.338416    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:40.338490    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:40.349385    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:40.349455    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:40.359548    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:40.359618    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:40.374420    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:40.374490    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:40.384871    4660 logs.go:276] 0 containers: []
	W0904 13:17:40.384882    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:40.384941    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:40.395681    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:40.395698    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:40.395704    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:40.407741    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:40.407755    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:40.443109    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:40.443120    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:40.466705    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:40.466713    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:40.484447    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:40.484459    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:40.496058    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:40.496069    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:40.507656    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:40.507666    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:40.518755    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:40.518766    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:40.531752    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:40.531762    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:40.535897    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:40.535903    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:40.554565    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:40.554576    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:40.593007    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:40.593021    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:40.607191    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:40.607209    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:40.618724    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:40.618737    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:40.636296    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:40.636311    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:40.647740    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:40.647752    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:40.685626    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:40.685636    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:43.202578    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:48.203929    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:48.204171    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:48.223989    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:48.224092    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:48.237843    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:48.237921    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:48.252112    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:48.252180    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:48.262248    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:48.262320    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:48.273088    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:48.273155    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:48.284260    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:48.284332    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:48.293818    4660 logs.go:276] 0 containers: []
	W0904 13:17:48.293828    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:48.293887    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:48.305034    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:48.305052    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:48.305059    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:48.342325    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:48.342342    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:48.353499    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:48.353513    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:48.365070    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:48.365080    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:48.381184    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:48.381194    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:48.400084    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:48.400096    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:48.411609    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:48.411624    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:48.415715    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:48.415720    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:48.478359    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:48.478374    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:48.493397    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:48.493408    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:48.507566    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:48.507579    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:48.519360    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:48.519374    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:48.557966    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:48.557982    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:48.570322    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:48.570334    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:48.584795    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:48.584808    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:48.597136    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:48.597146    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:48.608588    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:48.608598    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
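
Each gathering pass begins by rediscovering the per-component containers with the docker ps filter shown above; two IDs per control-plane component appear because the -a flag also lists what is likely an exited earlier instance alongside the current one. A sketch of that discovery step, reusing the exact command from the log (the helper name and iteration order are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // the kubeadm convention k8s_<component>, mirroring the
    // "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // The log prints e.g. "2 containers: [a2ce3feba4c3 b2ede15d553f]"
            // for each component, and an empty list for kindnet.
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
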
	I0904 13:17:51.136420    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:17:56.138522    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:17:56.138651    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:17:56.150428    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:17:56.150535    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:17:56.161565    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:17:56.161633    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:17:56.172002    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:17:56.172073    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:17:56.186789    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:17:56.186864    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:17:56.198227    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:17:56.198296    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:17:56.209284    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:17:56.209352    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:17:56.219977    4660 logs.go:276] 0 containers: []
	W0904 13:17:56.219988    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:17:56.220051    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:17:56.230340    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:17:56.230360    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:17:56.230366    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:17:56.234691    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:17:56.234700    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:17:56.248313    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:17:56.248323    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:17:56.260194    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:17:56.260205    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:17:56.277533    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:17:56.277543    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:17:56.314536    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:17:56.314545    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:17:56.328634    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:17:56.328644    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:17:56.343426    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:17:56.343436    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:17:56.355240    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:17:56.355250    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:17:56.366503    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:17:56.366516    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:17:56.391769    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:17:56.391784    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:17:56.426776    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:17:56.426790    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:17:56.465856    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:17:56.465867    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:17:56.477974    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:17:56.477987    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:17:56.494231    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:17:56.494244    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:17:56.505842    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:17:56.505853    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:17:56.519769    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:17:56.519784    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:17:59.032675    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:04.034125    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:04.034415    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:04.062604    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:04.062736    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:04.080501    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:04.080592    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:04.093917    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:04.094009    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:04.105765    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:04.105839    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:04.116319    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:04.116391    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:04.141219    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:04.141298    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:04.151650    4660 logs.go:276] 0 containers: []
	W0904 13:18:04.151662    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:04.151721    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:04.163717    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:04.163739    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:04.163744    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:04.200394    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:04.200405    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:04.211772    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:04.211784    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:04.227051    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:04.227064    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:04.239013    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:04.239028    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:04.250192    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:04.250202    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:04.265109    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:04.265121    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:04.269364    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:04.269373    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:04.283141    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:04.283152    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:04.296928    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:04.296942    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:04.309010    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:04.309022    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:04.334633    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:04.334640    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:04.346270    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:04.346285    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:04.382392    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:04.382403    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:04.397484    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:04.397497    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:04.416785    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:04.416796    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:04.454647    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:04.454661    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
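
Once the IDs are known, the collector fans out over a fixed set of sources: docker logs --tail 400 per container, journalctl for the kubelet and Docker/cri-docker units, dmesg, kubectl describe nodes via the minikube-installed binary, and a container-status command that falls back from crictl to docker ps. A compact sketch of that fan-out, assuming passwordless sudo inside the guest (the command strings are copied from the log; the runner itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs each collection command through bash, the same way the
    // ssh_runner.go:195 lines above show them executing in the guest.
    func gather(containers map[string]string) {
        cmds := []string{
            `sudo journalctl -u kubelet -n 400`,              // "Gathering logs for kubelet"
            `sudo journalctl -u docker -u cri-docker -n 400`, // "Gathering logs for Docker"
            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", // "container status"
        }
        for name, id := range containers {
            cmds = append(cmds, fmt.Sprintf("docker logs --tail 400 %s # %s", id, name))
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Printf("command %q failed: %v\n", c, err)
            }
            _ = out // the real collector buffers each source separately for the report
        }
    }

    func main() {
        gather(map[string]string{
            "kube-apiserver": "a2ce3feba4c3",
            "etcd":           "79f387979643",
        })
    }
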
	I0904 13:18:06.974742    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:11.976902    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:11.977143    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:12.002710    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:12.002834    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:12.018745    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:12.018826    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:12.038957    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:12.039026    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:12.049696    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:12.049771    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:12.060939    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:12.061004    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:12.071531    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:12.071603    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:12.081664    4660 logs.go:276] 0 containers: []
	W0904 13:18:12.081675    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:12.081730    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:12.092612    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:12.092628    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:12.092633    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:12.132062    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:12.132077    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:12.146193    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:12.146206    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:12.157686    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:12.157697    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:12.183607    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:12.183625    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:12.195404    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:12.195417    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:12.207651    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:12.207667    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:12.223981    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:12.223993    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:12.228422    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:12.228429    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:12.267311    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:12.267329    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:12.281456    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:12.281484    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:12.296608    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:12.296622    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:12.313798    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:12.313810    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:12.325385    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:12.325398    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:12.337504    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:12.337515    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:12.373455    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:12.373470    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:12.388693    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:12.388705    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:14.900813    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:19.903063    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:19.903307    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:19.928501    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:19.928612    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:19.945621    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:19.945701    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:19.960081    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:19.960163    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:19.971368    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:19.971433    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:19.981663    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:19.981741    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:19.992376    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:19.992447    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:20.002633    4660 logs.go:276] 0 containers: []
	W0904 13:18:20.002643    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:20.002702    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:20.013358    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:20.013376    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:20.013382    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:20.049659    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:20.049677    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:20.064248    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:20.064262    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:20.075794    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:20.075812    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:20.093212    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:20.093227    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:20.105331    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:20.105343    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:20.110064    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:20.110074    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:20.124553    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:20.124563    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:20.137110    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:20.137121    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:20.152307    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:20.152318    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:20.167614    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:20.167624    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:20.182923    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:20.182935    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:20.198549    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:20.198558    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:20.224102    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:20.224124    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:20.263724    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:20.263744    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:20.311865    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:20.311878    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:20.324299    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:20.324310    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:22.837874    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:27.840101    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:27.840298    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:27.861084    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:27.861175    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:27.876319    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:27.876398    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:27.888892    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:27.888971    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:27.901903    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:27.901982    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:27.914122    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:27.914196    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:27.924864    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:27.924935    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:27.935166    4660 logs.go:276] 0 containers: []
	W0904 13:18:27.935177    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:27.935236    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:27.945754    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:27.945773    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:27.945778    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:27.957440    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:27.957451    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:27.980800    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:27.980808    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:27.993019    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:27.993029    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:27.997652    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:27.997659    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:28.015806    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:28.015817    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:28.027574    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:28.027584    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:28.039672    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:28.039683    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:28.050798    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:28.050809    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:28.064801    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:28.064812    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:28.076611    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:28.076625    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:28.115230    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:28.115240    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:28.130231    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:28.130240    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:28.168085    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:28.168096    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:28.182392    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:28.182401    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:28.194174    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:28.194184    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:28.230204    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:28.230216    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:30.745145    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:35.747447    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:35.747789    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:35.779562    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:35.779685    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:35.798908    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:35.799004    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:35.813148    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:35.813227    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:35.825567    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:35.825642    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:35.836935    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:35.837001    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:35.847831    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:35.847904    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:35.858325    4660 logs.go:276] 0 containers: []
	W0904 13:18:35.858337    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:35.858397    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:35.869662    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:35.869679    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:35.869685    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:35.884176    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:35.884186    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:35.900275    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:35.900287    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:35.911709    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:35.911719    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:35.923407    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:35.923422    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:35.961277    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:35.961285    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:35.966010    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:35.966018    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:36.001331    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:36.001341    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:36.015383    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:36.015394    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:36.028485    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:36.028495    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:36.039746    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:36.039757    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:36.054604    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:36.054619    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:36.070321    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:36.070331    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:36.108473    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:36.108484    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:36.123466    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:36.123478    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:36.140817    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:36.140828    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:36.164230    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:36.164236    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:38.679571    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:43.681776    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:43.682122    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:43.713207    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:43.713348    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:43.733206    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:43.733315    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:43.747647    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:43.747713    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:43.759295    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:43.759368    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:43.769755    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:43.769828    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:43.780637    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:43.780707    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:43.791339    4660 logs.go:276] 0 containers: []
	W0904 13:18:43.791350    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:43.791408    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:43.802694    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:43.802713    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:43.802720    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:43.818194    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:43.818205    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:43.833700    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:43.833709    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:43.851671    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:43.851682    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:43.874727    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:43.874740    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:43.878986    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:43.878994    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:43.892684    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:43.892693    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:43.907385    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:43.907395    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:43.921343    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:43.921354    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:43.957510    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:43.957520    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:43.969314    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:43.969324    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:43.981297    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:43.981308    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:43.992875    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:43.992887    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:44.004901    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:44.004912    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:44.041751    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:44.041761    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:44.083191    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:44.083201    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:44.094940    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:44.094954    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:46.608286    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:51.610503    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:51.610633    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:51.623632    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:51.623707    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:51.638782    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:51.638854    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:51.648777    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:51.648840    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:51.658927    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:51.658997    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:51.669519    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:51.669583    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:51.680251    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:51.680323    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:51.690238    4660 logs.go:276] 0 containers: []
	W0904 13:18:51.690250    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:51.690312    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:51.701744    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:51.701763    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:51.701769    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:51.740434    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:51.740444    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:51.758681    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:51.758691    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:51.771321    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:51.771333    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:51.785645    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:51.785655    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:51.824033    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:51.824058    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:51.835657    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:51.835667    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:51.847870    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:51.847883    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:51.852020    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:51.852026    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:51.868351    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:51.868368    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:51.884433    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:51.884450    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:51.900057    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:51.900067    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:51.935933    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:51.935947    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:51.951404    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:51.951417    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:51.962333    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:51.962346    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:51.976732    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:51.976743    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:51.988018    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:51.988032    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:18:54.513928    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:18:59.516220    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:18:59.516471    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:18:59.532830    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:18:59.532927    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:18:59.546189    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:18:59.546257    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:18:59.557166    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:18:59.557240    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:18:59.571322    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:18:59.571390    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:18:59.582046    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:18:59.582112    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:18:59.593194    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:18:59.593261    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:18:59.603671    4660 logs.go:276] 0 containers: []
	W0904 13:18:59.603689    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:18:59.603749    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:18:59.614508    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:18:59.614526    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:18:59.614531    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:18:59.632121    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:18:59.632133    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:18:59.643401    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:18:59.643415    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:18:59.678328    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:18:59.678342    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:18:59.692584    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:18:59.692594    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:18:59.732153    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:18:59.732165    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:18:59.744206    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:18:59.744216    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:18:59.756163    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:18:59.756174    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:18:59.769298    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:18:59.769309    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:18:59.784920    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:18:59.784932    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:18:59.823050    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:18:59.823059    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:18:59.837670    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:18:59.837679    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:18:59.848960    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:18:59.848973    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:18:59.863629    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:18:59.863640    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:18:59.875520    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:18:59.875532    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:18:59.879498    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:18:59.879505    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:18:59.894130    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:18:59.894140    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:02.420695    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:07.422971    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:07.423112    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:07.436869    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:07.436940    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:07.451553    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:07.451625    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:07.461939    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:07.462016    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:07.472865    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:07.472938    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:07.483609    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:07.483682    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:07.494626    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:07.494696    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:07.505053    4660 logs.go:276] 0 containers: []
	W0904 13:19:07.505064    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:07.505122    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:07.524026    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:07.524043    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:07.524050    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:07.528393    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:07.528400    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:07.565483    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:07.565496    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:07.577287    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:07.577301    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:07.595955    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:07.595966    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:07.608317    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:07.608328    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:07.645033    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:07.645047    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:07.659289    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:07.659299    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:07.673406    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:07.673416    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:07.684237    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:07.684249    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:07.708681    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:07.708691    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:07.720953    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:07.720967    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:07.735747    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:07.735757    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:07.752994    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:07.753005    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:07.790552    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:07.790561    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:07.804409    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:07.804419    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:07.815992    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:07.816006    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
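
With the IDs in hand, the gathering phase fans out across every matched container plus kubelet, dmesg, the docker/cri-docker journals, and kubectl describe nodes, capping each source at 400 lines. A minimal local sketch of the per-container step, assuming the Docker CLI is on PATH; the real runner wraps the same command in /bin/bash -c and executes it over ssh:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather tails the last 400 lines of one container's logs, mirroring
    // the repeated: docker logs --tail 400 <id>
    func gather(name, id string) {
        fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Print(string(out))
    }

    func main() {
        // IDs as discovered in the cycle above; hypothetical on any other host.
        gather("kube-apiserver", "b2ede15d553f")
        gather("etcd", "58f0be9a136f")
    }
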
	I0904 13:19:10.328788    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:15.330063    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
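
The five-second gap between each "Checking apiserver healthz" line and its "stopped" line is a client-side timeout: the probe GETs /healthz and surfaces the request error when nothing answers within the budget. A minimal sketch of such a probe; the InsecureSkipVerify shortcut is this sketch's assumption, since minikube actually verifies against its cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz approximates the check behind api_server.go:253/269:
    // GET https://<node-ip>:8443/healthz with a ~5s budget.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap in the log
            Transport: &http.Transport{
                // Sketch only: skip verification instead of loading the CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %w", err) // e.g. context deadline exceeded
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
        }
        return nil // healthz answered "ok"
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }
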
	I0904 13:19:15.330295    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:15.350291    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:15.350381    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:15.364972    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:15.365046    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:15.377488    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:15.377561    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:15.387984    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:15.388056    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:15.398334    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:15.398395    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:15.410117    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:15.410177    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:15.420598    4660 logs.go:276] 0 containers: []
	W0904 13:19:15.420609    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:15.420669    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:15.431245    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:15.431260    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:15.431265    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:15.447556    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:15.447570    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:15.459407    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:15.459417    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:15.471742    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:15.471753    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:15.483686    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:15.483700    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:15.521929    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:15.521939    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:15.542912    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:15.542925    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:15.582289    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:15.582300    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:15.597224    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:15.597235    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:15.614755    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:15.614767    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:15.626759    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:15.626772    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:15.631898    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:15.631907    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:15.656649    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:15.656666    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:15.670207    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:15.670221    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:15.684178    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:15.684194    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:15.695289    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:15.695299    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:15.717345    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:15.717353    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:18.255778    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:23.258124    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:23.258419    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:23.287767    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:23.287891    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:23.305230    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:23.305320    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:23.318784    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:23.318860    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:23.331409    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:23.331475    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:23.342078    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:23.342150    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:23.352777    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:23.352852    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:23.362973    4660 logs.go:276] 0 containers: []
	W0904 13:19:23.362983    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:23.363041    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:23.373387    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:23.373405    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:23.373410    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:23.385231    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:23.385243    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:23.423956    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:23.423965    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:23.437876    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:23.437887    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:23.449878    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:23.449888    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:23.467347    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:23.467357    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:23.478760    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:23.478770    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:23.514099    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:23.514110    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:23.552687    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:23.552699    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:23.567342    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:23.567352    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:23.579043    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:23.579055    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:23.590364    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:23.590374    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:23.594561    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:23.594571    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:23.617005    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:23.617016    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:23.631410    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:23.631423    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:23.654210    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:23.654221    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:23.668140    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:23.668152    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:26.181459    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:31.184124    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:31.184405    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:31.219641    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:31.219787    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:31.238011    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:31.238116    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:31.251511    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:31.251608    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:31.267302    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:31.267382    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:31.277814    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:31.277881    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:31.293276    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:31.293349    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:31.304537    4660 logs.go:276] 0 containers: []
	W0904 13:19:31.304549    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:31.304610    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:31.315718    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:31.315735    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:31.315741    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:31.353642    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:31.353655    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:31.368477    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:31.368489    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:31.383318    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:31.383333    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:31.394698    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:31.394712    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:31.432644    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:31.432657    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:31.468558    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:31.468570    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:31.488456    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:31.488466    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:31.513204    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:31.513214    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:31.534208    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:31.534218    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:31.545997    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:31.546008    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:31.558275    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:31.558286    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:31.573071    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:31.573084    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:31.585387    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:31.585399    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:31.604135    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:31.604147    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:31.608823    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:31.608831    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:31.623263    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:31.623273    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:34.137548    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:39.140033    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:39.140276    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:39.162623    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:39.162724    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:39.178558    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:39.178641    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:39.192659    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:39.192733    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:39.211288    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:39.211356    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:39.222379    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:39.222453    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:39.233053    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:39.233127    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:39.243304    4660 logs.go:276] 0 containers: []
	W0904 13:19:39.243314    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:39.243372    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:39.255266    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:39.255284    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:39.255291    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:39.259387    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:39.259396    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:39.295553    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:39.295563    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:39.307561    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:39.307572    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:39.323111    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:39.323124    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:39.336048    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:39.336059    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:39.374850    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:39.374859    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:39.422266    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:39.422277    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:39.436402    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:39.436412    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:39.447276    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:39.447288    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:39.465005    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:39.465016    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:39.488225    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:39.488237    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:39.508659    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:39.508671    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:39.520205    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:39.520216    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:39.532553    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:39.532565    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:39.546722    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:39.546732    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:39.558930    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:39.558941    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:42.072384    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:47.074656    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:47.074744    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:19:47.085482    4660 logs.go:276] 2 containers: [a2ce3feba4c3 b2ede15d553f]
	I0904 13:19:47.085545    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:19:47.098088    4660 logs.go:276] 2 containers: [79f387979643 58f0be9a136f]
	I0904 13:19:47.098153    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:19:47.108898    4660 logs.go:276] 1 containers: [8985f7ddf6fc]
	I0904 13:19:47.108970    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:19:47.119275    4660 logs.go:276] 2 containers: [691954e81b9d bd580d1877e3]
	I0904 13:19:47.119343    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:19:47.129720    4660 logs.go:276] 1 containers: [8f1e71371b5d]
	I0904 13:19:47.129781    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:19:47.140501    4660 logs.go:276] 2 containers: [2e8fd199897c 6a33a036cd8e]
	I0904 13:19:47.140562    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:19:47.150272    4660 logs.go:276] 0 containers: []
	W0904 13:19:47.150289    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:19:47.150352    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:19:47.161386    4660 logs.go:276] 2 containers: [961899a85b2c 274def6e44bc]
	I0904 13:19:47.161410    4660 logs.go:123] Gathering logs for kube-scheduler [bd580d1877e3] ...
	I0904 13:19:47.161416    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd580d1877e3"
	I0904 13:19:47.177192    4660 logs.go:123] Gathering logs for kube-proxy [8f1e71371b5d] ...
	I0904 13:19:47.177203    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f1e71371b5d"
	I0904 13:19:47.188827    4660 logs.go:123] Gathering logs for kube-controller-manager [2e8fd199897c] ...
	I0904 13:19:47.188839    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e8fd199897c"
	I0904 13:19:47.207895    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:19:47.207906    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:19:47.212365    4660 logs.go:123] Gathering logs for kube-apiserver [b2ede15d553f] ...
	I0904 13:19:47.212371    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2ede15d553f"
	I0904 13:19:47.249501    4660 logs.go:123] Gathering logs for coredns [8985f7ddf6fc] ...
	I0904 13:19:47.249511    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8985f7ddf6fc"
	I0904 13:19:47.264334    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:19:47.264345    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:19:47.288037    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:19:47.288047    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:19:47.326786    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:19:47.326798    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:19:47.362420    4660 logs.go:123] Gathering logs for kube-apiserver [a2ce3feba4c3] ...
	I0904 13:19:47.362432    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2ce3feba4c3"
	I0904 13:19:47.377334    4660 logs.go:123] Gathering logs for kube-scheduler [691954e81b9d] ...
	I0904 13:19:47.377344    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 691954e81b9d"
	I0904 13:19:47.389575    4660 logs.go:123] Gathering logs for storage-provisioner [961899a85b2c] ...
	I0904 13:19:47.389587    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 961899a85b2c"
	I0904 13:19:47.403997    4660 logs.go:123] Gathering logs for etcd [79f387979643] ...
	I0904 13:19:47.404010    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79f387979643"
	I0904 13:19:47.417306    4660 logs.go:123] Gathering logs for etcd [58f0be9a136f] ...
	I0904 13:19:47.417316    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58f0be9a136f"
	I0904 13:19:47.431769    4660 logs.go:123] Gathering logs for kube-controller-manager [6a33a036cd8e] ...
	I0904 13:19:47.431779    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a33a036cd8e"
	I0904 13:19:47.444404    4660 logs.go:123] Gathering logs for storage-provisioner [274def6e44bc] ...
	I0904 13:19:47.444414    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 274def6e44bc"
	I0904 13:19:47.455875    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:19:47.455887    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:19:49.970090    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:19:54.972285    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:19:54.972341    4660 kubeadm.go:597] duration metric: took 4m4.657293625s to restartPrimaryControlPlane
	W0904 13:19:54.972401    4660 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0904 13:19:54.972427    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0904 13:19:55.973814    4660 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001386458s)
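
The "took 4m4.657293625s to restartPrimaryControlPlane" metric closes a fixed retry budget: the probe loop keeps cycling until roughly four minutes have elapsed, after which kubeadm.go stops trying to revive the existing control plane and falls back to kubeadm reset followed by a fresh init. A condensed sketch of that control flow; probe and reset here are hypothetical stand-ins for the ssh-backed helpers:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitThenReset polls until the budget is spent, then gives up and resets.
    func waitThenReset(budget time.Duration, probe, reset func() error) error {
        start := time.Now()
        for time.Since(start) < budget {
            if probe() == nil {
                return nil // apiserver answered; keep the existing cluster
            }
            time.Sleep(100 * time.Millisecond) // the real loop paces in seconds
        }
        fmt.Printf("duration metric: took %s to restartPrimaryControlPlane\n", time.Since(start))
        fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
        return reset()
    }

    func main() {
        down := func() error { return errors.New("context deadline exceeded") }
        reset := func() error { fmt.Println("kubeadm reset --force"); return nil }
        _ = waitThenReset(500*time.Millisecond, down, reset) // short budget for the demo
    }
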
	I0904 13:19:55.973892    4660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 13:19:55.978998    4660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 13:19:55.981833    4660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 13:19:55.984880    4660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 13:19:55.984887    4660 kubeadm.go:157] found existing configuration files:
	
	I0904 13:19:55.984911    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf
	I0904 13:19:55.987907    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 13:19:55.987929    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 13:19:55.990708    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf
	I0904 13:19:55.993415    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 13:19:55.993441    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 13:19:55.996807    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf
	I0904 13:19:55.999854    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 13:19:55.999879    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 13:19:56.002472    4660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf
	I0904 13:19:56.005111    4660 kubeadm.go:163] "https://control-plane.minikube.internal:50564" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50564 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 13:19:56.005137    4660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
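
The grep/rm sequence above enforces a simple invariant before re-init: any kubeconfig under /etc/kubernetes that cannot be confirmed to reference the expected control-plane endpoint is deleted so kubeadm regenerates it. In this run every file is already missing, so each grep exits with status 2 and all four get the "will remove" treatment. A local-filesystem sketch of the same check (the real commands run over ssh inside the guest):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs mirrors the grep-then-rm pattern in the log:
    // remove any config that does not mention the expected endpoint.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // config already points at the right endpoint
            }
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
            os.Remove(p) // errors ignored, like rm -f
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:50564",
            []string{
                "/etc/kubernetes/admin.conf",
                "/etc/kubernetes/kubelet.conf",
                "/etc/kubernetes/controller-manager.conf",
                "/etc/kubernetes/scheduler.conf",
            })
    }
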
	I0904 13:19:56.008356    4660 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 13:19:56.029219    4660 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0904 13:19:56.029281    4660 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 13:19:56.088411    4660 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 13:19:56.088470    4660 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 13:19:56.088527    4660 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0904 13:19:56.141359    4660 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 13:19:56.149599    4660 out.go:235]   - Generating certificates and keys ...
	I0904 13:19:56.149634    4660 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 13:19:56.149663    4660 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 13:19:56.149701    4660 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0904 13:19:56.149735    4660 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0904 13:19:56.149774    4660 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0904 13:19:56.149805    4660 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0904 13:19:56.149833    4660 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0904 13:19:56.149861    4660 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0904 13:19:56.149905    4660 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0904 13:19:56.149960    4660 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0904 13:19:56.149980    4660 kubeadm.go:310] [certs] Using the existing "sa" key
	I0904 13:19:56.150012    4660 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 13:19:56.272152    4660 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 13:19:56.320860    4660 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 13:19:56.515671    4660 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 13:19:56.764096    4660 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 13:19:56.793089    4660 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 13:19:56.793744    4660 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 13:19:56.793941    4660 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 13:19:56.865170    4660 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 13:19:56.868357    4660 out.go:235]   - Booting up control plane ...
	I0904 13:19:56.868469    4660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 13:19:56.868508    4660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 13:19:56.868594    4660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 13:19:56.868744    4660 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 13:19:56.869214    4660 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0904 13:20:00.871351    4660 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002552 seconds
	I0904 13:20:00.871470    4660 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 13:20:00.875268    4660 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 13:20:01.389303    4660 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 13:20:01.389548    4660 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-175000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 13:20:01.895585    4660 kubeadm.go:310] [bootstrap-token] Using token: e43l1m.2immplqdgm4q9v3p
	I0904 13:20:01.901862    4660 out.go:235]   - Configuring RBAC rules ...
	I0904 13:20:01.901924    4660 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 13:20:01.901973    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 13:20:01.907071    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 13:20:01.907842    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 13:20:01.908635    4660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 13:20:01.909470    4660 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 13:20:01.912673    4660 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 13:20:02.079797    4660 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 13:20:02.299813    4660 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 13:20:02.300272    4660 kubeadm.go:310] 
	I0904 13:20:02.300303    4660 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 13:20:02.300306    4660 kubeadm.go:310] 
	I0904 13:20:02.300348    4660 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 13:20:02.300351    4660 kubeadm.go:310] 
	I0904 13:20:02.300375    4660 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 13:20:02.300408    4660 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 13:20:02.300436    4660 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 13:20:02.300440    4660 kubeadm.go:310] 
	I0904 13:20:02.300464    4660 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 13:20:02.300469    4660 kubeadm.go:310] 
	I0904 13:20:02.300493    4660 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 13:20:02.300496    4660 kubeadm.go:310] 
	I0904 13:20:02.300531    4660 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 13:20:02.300574    4660 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 13:20:02.300618    4660 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 13:20:02.300623    4660 kubeadm.go:310] 
	I0904 13:20:02.300667    4660 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 13:20:02.300710    4660 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 13:20:02.300713    4660 kubeadm.go:310] 
	I0904 13:20:02.300771    4660 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e43l1m.2immplqdgm4q9v3p \
	I0904 13:20:02.300823    4660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 \
	I0904 13:20:02.300834    4660 kubeadm.go:310] 	--control-plane 
	I0904 13:20:02.300838    4660 kubeadm.go:310] 
	I0904 13:20:02.300879    4660 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 13:20:02.300884    4660 kubeadm.go:310] 
	I0904 13:20:02.300925    4660 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e43l1m.2immplqdgm4q9v3p \
	I0904 13:20:02.301016    4660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3feb851b3bc39caa9868530b83b064422b69401534f2eff748003ac6b1086498 
	I0904 13:20:02.301082    4660 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
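
The --discovery-token-ca-cert-hash printed with both join commands is kubeadm's CA pin: a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate, which joining nodes use to authenticate the control plane. A sketch that recomputes the same digest from the certificate directory named earlier in this log; the exact ca.crt path is an assumption:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash recomputes the discovery-token-ca-cert-hash: sha256 over
    // the DER-encoded Subject Public Key Info of the CA certificate.
    func caCertHash(caPath string) (string, error) {
        pemBytes, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(h)
    }
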
	I0904 13:20:02.301108    4660 cni.go:84] Creating CNI manager for ""
	I0904 13:20:02.301118    4660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:20:02.304829    4660 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 13:20:02.312768    4660 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 13:20:02.316172    4660 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
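
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are the bridge CNI configuration that the preceding lines recommend for the qemu2 driver with the docker runtime. A sketch that emits a conflist of the same general shape; every field value here is illustrative rather than the exact content minikube writes:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Minimal bridge + host-local conflist of the kind written to
        // /etc/cni/net.d/1-k8s.conflist; values are illustrative.
        conf := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{{
                "type":             "bridge",
                "bridge":           "bridge",
                "isDefaultGateway": true,
                "ipMasq":           true,
                "hairpinMode":      true,
                "ipam": map[string]any{
                    "type":   "host-local",
                    "subnet": "10.244.0.0/16",
                },
            }},
        }
        out, _ := json.MarshalIndent(conf, "", "  ")
        fmt.Println(string(out))
    }
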
	I0904 13:20:02.321378    4660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 13:20:02.321460    4660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-175000 minikube.k8s.io/updated_at=2024_09_04T13_20_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=stopped-upgrade-175000 minikube.k8s.io/primary=true
	I0904 13:20:02.321460    4660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 13:20:02.334183    4660 ops.go:34] apiserver oom_adj: -16
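
ops.go reads /proc/<pid>/oom_adj for the apiserver and finds -16, i.e. the kernel is biased against OOM-killing it under memory pressure. A tiny sketch of the same check on a Linux host, using a simpler pgrep match than the log's kube-apiserver.*minikube.* pattern:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
        pid, err := exec.Command("pgrep", "-n", "-x", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process:", err)
            return
        }
        adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
    }
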
	I0904 13:20:02.365257    4660 kubeadm.go:1113] duration metric: took 43.856541ms to wait for elevateKubeSystemPrivileges
	I0904 13:20:02.365269    4660 kubeadm.go:394] duration metric: took 4m12.063818708s to StartCluster
	I0904 13:20:02.365280    4660 settings.go:142] acquiring lock: {Name:mk9e5d70c30d2e6b96e7a9eeb7ab14f5f9a1127e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:20:02.365369    4660 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:20:02.365802    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/kubeconfig: {Name:mk2a8055a803f1d023c814308503721b85f2130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:20:02.366005    4660 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:20:02.366038    4660 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 13:20:02.366077    4660 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-175000"
	I0904 13:20:02.366090    4660 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-175000"
	W0904 13:20:02.366094    4660 addons.go:243] addon storage-provisioner should already be in state true
	I0904 13:20:02.366094    4660 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-175000"
	I0904 13:20:02.366107    4660 host.go:66] Checking if "stopped-upgrade-175000" exists ...
	I0904 13:20:02.366109    4660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-175000"
	I0904 13:20:02.366128    4660 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:20:02.366505    4660 retry.go:31] will retry after 648.253957ms: connect: dial unix /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/monitor: connect: connection refused
	I0904 13:20:02.367235    4660 kapi.go:59] client config for stopped-upgrade-175000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.key", CAFile:"/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10217ff80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
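
The kapi.go dump above is a client-go rest.Config aimed at the node IP with the profile's client certificate, key, and CA (the unexported sanitizedTLSClientConfig in the print-out is just the redacted form of rest.TLSClientConfig). Rebuilt as a compilable sketch, with the host and file paths taken verbatim from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Mirrors the rest.Config printed by kapi.go:59 above.
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/stopped-upgrade-175000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19575-1140/.minikube/ca.crt",
            },
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("client ready: %T\n", client)
    }
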
	I0904 13:20:02.367351    4660 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-175000"
	W0904 13:20:02.367356    4660 addons.go:243] addon default-storageclass should already be in state true
	I0904 13:20:02.367368    4660 host.go:66] Checking if "stopped-upgrade-175000" exists ...
	I0904 13:20:02.367881    4660 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 13:20:02.367886    4660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 13:20:02.367891    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:20:02.369770    4660 out.go:177] * Verifying Kubernetes components...
	I0904 13:20:02.376750    4660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 13:20:02.452948    4660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 13:20:02.458313    4660 api_server.go:52] waiting for apiserver process to appear ...
	I0904 13:20:02.458347    4660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 13:20:02.462245    4660 api_server.go:72] duration metric: took 96.231208ms to wait for apiserver process to appear ...
	I0904 13:20:02.462252    4660 api_server.go:88] waiting for apiserver healthz status ...
	I0904 13:20:02.462258    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:02.520369    4660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 13:20:02.840626    4660 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 13:20:02.840639    4660 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 13:20:03.021622    4660 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 13:20:03.025596    4660 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:20:03.025603    4660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 13:20:03.025612    4660 sshutil.go:53] new ssh client: &{IP:localhost Port:50529 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/stopped-upgrade-175000/id_rsa Username:docker}
	I0904 13:20:03.061941    4660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 13:20:07.464329    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:07.464357    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:12.464983    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:12.465003    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:17.465332    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:17.465387    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:22.465878    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:22.465942    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:27.466618    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:27.466669    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:32.467531    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:32.467578    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0904 13:20:32.842147    4660 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0904 13:20:32.845402    4660 out.go:177] * Enabled addons: storage-provisioner
	I0904 13:20:32.854404    4660 addons.go:510] duration metric: took 30.488889083s for enable addons: enabled=[storage-provisioner]
	I0904 13:20:37.468688    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:37.468739    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:42.470197    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:42.470250    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:47.472139    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:47.472186    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:52.474360    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:52.474386    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:20:57.475492    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:20:57.475515    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:02.477660    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:02.477899    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:02.513842    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:02.513932    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:02.537157    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:02.537228    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:02.548324    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:02.548385    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:02.558506    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:02.558571    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:02.568584    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:02.568644    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:02.580452    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:02.580520    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:02.590814    4660 logs.go:276] 0 containers: []
	W0904 13:21:02.590828    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:02.590887    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:02.601256    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:02.601270    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:02.601276    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:02.613742    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:02.613755    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:02.638823    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:02.638834    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:02.672506    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:02.672516    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:02.688561    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:02.688570    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:02.702180    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:02.702192    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:02.713734    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:02.713749    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:02.725687    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:02.725700    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:02.736923    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:02.736936    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:02.741181    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:02.741191    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:21:02.775538    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:02.775553    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:02.788248    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:02.788261    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:02.802662    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:02.802674    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:05.322497    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:10.323754    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:10.323945    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:10.344051    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:10.344137    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:10.358552    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:10.358627    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:10.371096    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:10.371158    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:10.381702    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:10.381767    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:10.392193    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:10.392263    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:10.402710    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:10.402777    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:10.413143    4660 logs.go:276] 0 containers: []
	W0904 13:21:10.413153    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:10.413204    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:10.423110    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:10.423128    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:10.423134    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:10.437564    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:10.437578    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:10.448901    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:10.448910    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:10.483899    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:10.483906    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:10.488134    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:10.488140    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:21:10.525114    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:10.525126    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:10.539612    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:10.539625    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:10.554365    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:10.554377    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:10.565847    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:10.565856    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:10.589773    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:10.589783    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:10.601329    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:10.601342    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:10.618797    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:10.618807    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:10.630533    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:10.630543    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:13.143336    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:18.146025    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:18.146455    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:18.196758    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:18.196869    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:18.214054    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:18.214139    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:18.226820    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:18.226879    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:18.237668    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:18.237736    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:18.252441    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:18.252511    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:18.263154    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:18.263214    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:18.273412    4660 logs.go:276] 0 containers: []
	W0904 13:21:18.273424    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:18.273477    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:18.284266    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:18.284282    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:18.284287    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:18.295499    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:18.295514    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:18.308226    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:18.308238    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:18.325216    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:18.325227    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:18.329512    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:18.329521    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:21:18.364733    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:18.364743    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:18.380154    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:18.380165    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:18.395475    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:18.395487    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:18.406836    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:18.406850    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:18.433147    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:18.433156    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:18.444980    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:18.444992    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:18.477756    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:18.477762    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:18.492732    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:18.492742    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:21.004771    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:26.007472    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:26.007916    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:26.046385    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:26.046528    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:26.068845    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:26.068974    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:26.084081    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:26.084149    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:26.096806    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:26.096883    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:26.107897    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:26.107969    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:26.124627    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:26.124691    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:26.135088    4660 logs.go:276] 0 containers: []
	W0904 13:21:26.135097    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:26.135152    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:26.145885    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:26.145900    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:26.145905    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:26.157429    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:26.157441    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:26.169580    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:26.169593    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:26.181441    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:26.181452    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:26.198930    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:26.198942    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:26.219718    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:26.219730    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:26.231304    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:26.231314    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:26.243173    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:26.243183    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:26.257969    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:26.257980    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:26.276603    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:26.276612    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:26.300364    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:26.300375    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:26.332931    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:26.332937    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:26.336845    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:26.336851    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:21:28.872762    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:33.875178    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:33.875645    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:33.918757    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:33.918884    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:33.940115    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:33.940233    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:33.955091    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:33.955167    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:33.969865    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:33.969923    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:33.980943    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:33.981017    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:33.991367    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:33.991435    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:34.002131    4660 logs.go:276] 0 containers: []
	W0904 13:21:34.002144    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:34.002200    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:34.012677    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:34.012696    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:34.012701    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:34.036112    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:34.036120    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:34.047036    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:34.047051    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:21:34.085349    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:34.085359    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:34.100610    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:34.100623    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:34.114474    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:34.114484    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:34.126127    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:34.126141    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:34.142239    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:34.142250    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:34.156689    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:34.156700    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:34.192079    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:34.192090    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:34.197489    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:34.197500    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:34.208701    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:34.208713    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:34.220011    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:34.220025    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:36.736983    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:41.739684    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:41.740679    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:41.778112    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:41.778251    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:41.799882    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:41.799993    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:41.815021    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:41.815097    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:41.827694    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:41.827768    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:41.838068    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:41.838132    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:41.848595    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:41.848669    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:41.858255    4660 logs.go:276] 0 containers: []
	W0904 13:21:41.858268    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:41.858318    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:41.869545    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:41.869559    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:41.869564    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:21:41.905488    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:41.905502    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:41.930258    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:41.930264    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:41.934244    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:41.934249    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:41.948277    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:41.948287    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:41.962286    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:41.962300    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:41.974934    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:41.974948    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:41.986587    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:41.986600    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:42.001237    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:42.001247    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:42.012487    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:42.012498    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:42.030147    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:42.030160    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:42.063815    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:42.063822    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:42.075312    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:42.075324    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:44.589772    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:49.592246    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:49.592705    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:49.635849    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:49.635997    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:49.659487    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:49.659587    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:49.674282    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:49.674352    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:49.686956    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:49.687020    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:49.697411    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:49.697487    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:49.707802    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:49.707863    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:49.718697    4660 logs.go:276] 0 containers: []
	W0904 13:21:49.718709    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:49.718766    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:49.729779    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:49.729794    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:49.729802    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:49.744112    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:49.744126    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:49.758758    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:49.758772    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:49.770328    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:49.770338    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:49.787365    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:49.787377    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:49.799933    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:49.799943    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:49.817449    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:49.817459    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:49.828677    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:49.828686    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:49.863528    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:49.863535    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:49.867858    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:49.867867    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:21:49.905841    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:49.905852    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:49.917795    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:49.917805    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:49.941634    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:49.941642    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:52.454762    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:21:57.457139    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:21:57.457533    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:21:57.495806    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:21:57.495944    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:21:57.520619    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:21:57.520707    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:21:57.534379    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:21:57.534444    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:21:57.546098    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:21:57.546166    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:21:57.556808    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:21:57.556876    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:21:57.567352    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:21:57.567417    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:21:57.577659    4660 logs.go:276] 0 containers: []
	W0904 13:21:57.577671    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:21:57.577727    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:21:57.588282    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:21:57.588297    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:21:57.588302    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:21:57.611380    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:21:57.611390    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:21:57.622520    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:21:57.622533    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:21:57.637500    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:21:57.637513    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:21:57.651654    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:21:57.651670    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:21:57.677057    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:21:57.677066    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:21:57.689072    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:21:57.689085    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:21:57.700543    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:21:57.700556    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:21:57.715139    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:21:57.715152    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:21:57.726781    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:21:57.726795    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:21:57.738415    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:21:57.738429    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:21:57.772317    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:21:57.772328    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:21:57.776363    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:21:57.776371    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:00.319141    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:22:05.321411    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:22:05.321592    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:22:05.335502    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:22:05.335566    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:22:05.346812    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:22:05.346878    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:22:05.358189    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:22:05.358256    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:22:05.373296    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:22:05.373347    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:22:05.393670    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:22:05.393731    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:22:05.409153    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:22:05.409209    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:22:05.420647    4660 logs.go:276] 0 containers: []
	W0904 13:22:05.420657    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:22:05.420701    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:22:05.431082    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:22:05.431094    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:22:05.431100    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:22:05.435470    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:22:05.435477    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:05.473134    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:22:05.473143    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:22:05.485113    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:22:05.485124    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:22:05.509816    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:22:05.509827    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:22:05.521859    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:22:05.521872    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:22:05.557937    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:22:05.557946    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:22:05.572007    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:22:05.572018    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:22:05.585738    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:22:05.585749    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:22:05.597415    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:22:05.597425    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:22:05.611871    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:22:05.611883    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:22:05.623694    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:22:05.623707    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:22:05.641179    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:22:05.641189    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:22:08.154543    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:22:13.157313    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:22:13.157798    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:22:13.199842    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:22:13.199934    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:22:13.224997    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:22:13.225099    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:22:13.239295    4660 logs.go:276] 2 containers: [ce21979776a3 fd6fc2bac646]
	I0904 13:22:13.239368    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:22:13.251008    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:22:13.251078    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:22:13.261266    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:22:13.261332    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:22:13.271520    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:22:13.271589    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:22:13.283494    4660 logs.go:276] 0 containers: []
	W0904 13:22:13.283504    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:22:13.283551    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:22:13.293949    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:22:13.293965    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:22:13.293971    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:22:13.308937    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:22:13.308950    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:22:13.324904    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:22:13.324917    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:22:13.335906    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:22:13.335917    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:22:13.358942    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:22:13.358949    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:22:13.375879    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:22:13.375890    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:22:13.387123    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:22:13.387135    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:13.421932    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:22:13.421946    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:22:13.436109    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:22:13.436120    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:22:13.448372    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:22:13.448385    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:22:13.466320    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:22:13.466331    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:22:13.482706    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:22:13.482720    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:22:13.517164    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:22:13.517172    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:22:16.022313    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:22:21.024479    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:22:21.024555    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:22:21.038848    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:22:21.038922    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:22:21.050939    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:22:21.050991    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:22:21.062118    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:22:21.062183    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:22:21.073785    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:22:21.073832    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:22:21.087038    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:22:21.087096    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:22:21.098161    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:22:21.098220    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:22:21.109194    4660 logs.go:276] 0 containers: []
	W0904 13:22:21.109206    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:22:21.109257    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:22:21.120845    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:22:21.120863    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:22:21.120869    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:22:21.136465    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:22:21.136477    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:22:21.149722    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:22:21.149734    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:22:21.166668    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:22:21.166677    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:22:21.180926    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:22:21.180937    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:22:21.202386    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:22:21.202399    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:22:21.215234    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:22:21.215244    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:22:21.228106    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:22:21.228120    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:22:21.266189    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:22:21.266206    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:21.302982    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:22:21.302995    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:22:21.315294    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:22:21.315308    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:22:21.327758    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:22:21.327770    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:22:21.332265    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:22:21.332276    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:22:21.345225    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:22:21.345240    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:22:21.364178    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:22:21.364192    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:22:23.891768    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:22:28.894463    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:22:28.894871    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:22:28.931235    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:22:28.931368    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:22:28.952385    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:22:28.952501    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:22:28.970474    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:22:28.970551    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:22:28.982997    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:22:28.983068    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:22:28.993981    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:22:28.994047    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:22:29.004329    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:22:29.004389    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:22:29.018324    4660 logs.go:276] 0 containers: []
	W0904 13:22:29.018338    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:22:29.018399    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:22:29.029020    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:22:29.029036    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:22:29.029041    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:22:29.044131    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:22:29.044142    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:22:29.061892    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:22:29.061902    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:22:29.087537    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:22:29.087546    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:22:29.091664    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:22:29.091673    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:29.130500    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:22:29.130510    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:22:29.147064    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:22:29.147076    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:22:29.161524    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:22:29.161532    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:22:29.176895    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:22:29.176909    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:22:29.190579    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:22:29.190591    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:22:29.224896    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:22:29.224905    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:22:29.237217    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:22:29.237231    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:22:29.248602    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:22:29.248612    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:22:29.259889    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:22:29.259904    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:22:29.271122    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:22:29.271133    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:22:31.787167    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:22:36.789977    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:22:36.790437    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:22:36.829707    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:22:36.829836    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:22:36.851268    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:22:36.851355    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:22:36.866841    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:22:36.866917    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:22:36.879481    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:22:36.879549    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:22:36.890578    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:22:36.890645    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:22:36.902032    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:22:36.902087    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:22:36.912330    4660 logs.go:276] 0 containers: []
	W0904 13:22:36.912340    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:22:36.912397    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:22:36.922749    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:22:36.922770    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:22:36.922776    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:22:36.934452    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:22:36.934465    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:36.970202    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:22:36.970216    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:22:36.984705    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:22:36.984718    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:22:37.008156    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:22:37.008164    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:22:37.019807    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:22:37.019820    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:22:37.031606    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:22:37.031619    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:22:37.043381    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:22:37.043391    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:22:37.063673    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:22:37.063685    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:22:37.075386    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:22:37.075398    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:22:37.079750    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:22:37.079758    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:22:37.091721    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:22:37.091735    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:22:37.103664    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:22:37.103675    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:22:37.138129    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:22:37.138139    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:22:37.153198    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:22:37.153210    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:22:39.669507    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:22:44.671865    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:22:44.672252    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:22:44.721007    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:22:44.721127    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:22:44.740513    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:22:44.740606    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:22:44.756440    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:22:44.756528    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:22:44.770506    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:22:44.770579    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:22:44.783599    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:22:44.783670    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:22:44.795341    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:22:44.795422    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:22:44.807140    4660 logs.go:276] 0 containers: []
	W0904 13:22:44.807151    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:22:44.807197    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:22:44.818689    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:22:44.818706    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:22:44.818713    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:22:44.837305    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:22:44.837317    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:22:44.856765    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:22:44.856777    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:22:44.869389    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:22:44.869400    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:22:44.906723    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:22:44.906743    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:22:44.919397    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:22:44.919409    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:22:44.932014    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:22:44.932027    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:22:44.954377    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:22:44.954392    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:22:44.979440    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:22:44.979456    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:22:44.992853    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:22:44.992865    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:22:45.008599    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:22:45.008611    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:45.046755    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:22:45.046764    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:22:45.051028    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:22:45.051034    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:22:45.063411    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:22:45.063423    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:22:45.077236    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:22:45.077251    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:22:47.595375    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:22:52.597651    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:22:52.598136    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:22:52.640051    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:22:52.640203    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:22:52.662902    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:22:52.663011    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:22:52.678454    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:22:52.678534    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:22:52.691182    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:22:52.691248    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:22:52.702214    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:22:52.702285    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:22:52.714179    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:22:52.714249    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:22:52.724626    4660 logs.go:276] 0 containers: []
	W0904 13:22:52.724638    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:22:52.724695    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:22:52.741124    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:22:52.741145    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:22:52.741151    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:22:52.753980    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:22:52.753992    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:22:52.771014    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:22:52.771028    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:22:52.784151    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:22:52.784162    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:22:52.808242    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:22:52.808251    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:22:52.841871    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:22:52.841886    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:22:52.856110    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:22:52.856121    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:22:52.868109    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:22:52.868119    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:22:52.882657    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:22:52.882670    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:22:52.901435    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:22:52.901445    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:22:52.915340    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:22:52.915350    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:22:52.926784    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:22:52.926796    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:22:52.942289    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:22:52.942298    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:22:52.954216    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:22:52.954227    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:22:52.988109    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:22:52.988119    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:22:55.494264    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:00.496693    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:00.497122    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:00.536694    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:00.536816    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:00.558591    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:00.558688    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:00.574258    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:00.574332    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:00.586789    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:00.586859    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:00.598261    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:00.598331    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:00.608881    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:00.608940    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:00.619724    4660 logs.go:276] 0 containers: []
	W0904 13:23:00.619735    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:00.619791    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:00.630695    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:00.630713    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:00.630718    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:00.664339    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:00.664353    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:00.687949    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:00.687963    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:00.700253    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:00.700264    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:00.714894    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:00.714904    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:00.726776    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:00.726790    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:00.763207    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:00.763217    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:00.767477    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:00.767485    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:00.790037    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:00.790049    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:00.817680    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:00.817695    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:00.834826    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:00.834837    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:00.858929    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:00.858943    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:00.873487    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:00.873498    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:00.899427    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:00.899439    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:00.916627    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:00.916640    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:03.430858    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:08.433622    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:08.434070    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:08.473154    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:08.473282    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:08.493654    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:08.493734    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:08.508714    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:08.508789    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:08.521527    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:08.521592    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:08.532142    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:08.532204    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:08.542687    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:08.542756    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:08.553259    4660 logs.go:276] 0 containers: []
	W0904 13:23:08.553269    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:08.553324    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:08.563949    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:08.563970    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:08.563987    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:08.599019    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:08.599027    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:08.633781    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:08.633794    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:08.645860    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:08.645872    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:08.657455    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:08.657466    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:08.686395    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:08.686409    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:08.697942    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:08.697953    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:08.709782    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:08.709793    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:08.727617    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:08.727630    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:08.739061    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:08.739071    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:08.764680    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:08.764691    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:08.777030    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:08.777040    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:08.781666    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:08.781676    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:08.795867    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:08.795879    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:08.807721    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:08.807732    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:11.324656    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:16.326804    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:16.327027    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:16.358027    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:16.358128    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:16.377340    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:16.377409    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:16.391243    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:16.391316    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:16.402813    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:16.402885    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:16.414902    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:16.414961    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:16.433538    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:16.433599    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:16.443725    4660 logs.go:276] 0 containers: []
	W0904 13:23:16.443739    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:16.443800    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:16.454470    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:16.454489    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:16.454495    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:16.460099    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:16.460113    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:16.495134    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:16.495147    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:16.507285    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:16.507296    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:16.519438    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:16.519449    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:16.531593    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:16.531604    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:16.543249    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:16.543263    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:16.568620    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:16.568629    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:16.603726    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:16.603737    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:16.616742    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:16.616754    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:16.631607    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:16.631619    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:16.649264    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:16.649275    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:16.663329    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:16.663339    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:16.677551    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:16.677562    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:16.689094    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:16.689111    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:19.203466    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:24.205825    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:24.206101    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:24.233863    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:24.233973    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:24.255367    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:24.255442    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:24.268520    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:24.268592    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:24.279315    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:24.279379    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:24.289744    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:24.289813    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:24.300307    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:24.300368    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:24.310261    4660 logs.go:276] 0 containers: []
	W0904 13:23:24.310284    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:24.310333    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:24.328334    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:24.328353    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:24.328360    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:24.332749    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:24.332772    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:24.347128    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:24.347139    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:24.359271    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:24.359283    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:24.374268    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:24.374281    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:24.391959    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:24.391969    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:24.416873    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:24.416880    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:24.451483    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:24.451497    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:24.463244    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:24.463256    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:24.498185    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:24.498193    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:24.512059    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:24.512072    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:24.523509    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:24.523520    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:24.535267    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:24.535280    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:24.546798    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:24.546812    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:24.558101    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:24.558113    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:27.071972    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:32.074799    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:32.075191    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:32.114786    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:32.114928    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:32.136834    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:32.136946    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:32.152448    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:32.152525    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:32.168013    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:32.168077    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:32.178676    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:32.178740    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:32.189339    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:32.189419    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:32.200308    4660 logs.go:276] 0 containers: []
	W0904 13:23:32.200318    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:32.200374    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:32.210553    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:32.210570    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:32.210576    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:32.226250    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:32.226263    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:32.244174    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:32.244186    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:32.267354    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:32.267361    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:32.271456    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:32.271465    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:32.305233    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:32.305247    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:32.320459    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:32.320468    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:32.332075    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:32.332085    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:32.366741    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:32.366748    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:32.378110    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:32.378124    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:32.390441    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:32.390456    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:32.402427    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:32.402441    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:32.417059    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:32.417073    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:32.433544    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:32.433555    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:32.449406    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:32.449420    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:34.960804    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:39.961011    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:39.961353    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:39.989858    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:39.989970    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:40.008949    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:40.009029    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:40.025984    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:40.026056    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:40.036895    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:40.036961    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:40.047240    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:40.047303    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:40.058170    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:40.058241    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:40.068684    4660 logs.go:276] 0 containers: []
	W0904 13:23:40.068695    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:40.068751    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:40.078933    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:40.078949    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:40.078955    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:40.112357    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:40.112365    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:40.123676    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:40.123692    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:40.135351    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:40.135365    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:40.176016    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:40.176029    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:40.187926    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:40.187935    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:40.203674    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:40.203690    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:40.231300    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:40.231323    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:40.249217    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:40.249228    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:40.264626    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:40.264639    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:40.281576    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:40.281586    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:40.286502    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:40.286510    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:40.300979    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:40.300989    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:40.317113    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:40.317124    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:40.328519    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:40.328529    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:42.855436    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:47.857967    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:47.858031    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:47.869694    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:47.869760    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:47.881193    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:47.881269    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:47.891882    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:47.891942    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:47.902788    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:47.902848    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:47.914072    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:47.914137    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:47.932579    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:47.932645    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:47.942919    4660 logs.go:276] 0 containers: []
	W0904 13:23:47.942930    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:47.942981    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:47.954347    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:47.954363    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:47.954369    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:47.967089    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:47.967102    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:47.979825    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:47.979837    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:47.998262    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:47.998271    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:48.021375    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:48.021383    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:48.032658    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:48.032668    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:48.048280    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:48.048296    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:48.071597    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:48.071610    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:48.086038    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:48.086048    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:48.100285    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:48.100297    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:48.133564    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:48.133575    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:48.137809    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:48.137817    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:48.175917    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:48.175932    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:48.187972    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:48.187983    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:48.201627    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:48.201638    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:50.715490    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:23:55.718207    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:23:55.718628    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0904 13:23:55.758761    4660 logs.go:276] 1 containers: [b2da447975c0]
	I0904 13:23:55.758879    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0904 13:23:55.782570    4660 logs.go:276] 1 containers: [85b7558d1af2]
	I0904 13:23:55.782684    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0904 13:23:55.798049    4660 logs.go:276] 4 containers: [af22e408a2fb e267f8ea563f ce21979776a3 fd6fc2bac646]
	I0904 13:23:55.798118    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0904 13:23:55.810373    4660 logs.go:276] 1 containers: [30e001967d9c]
	I0904 13:23:55.810439    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0904 13:23:55.821044    4660 logs.go:276] 1 containers: [9ebf521cf3f1]
	I0904 13:23:55.821100    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0904 13:23:55.831051    4660 logs.go:276] 1 containers: [088cbdfececb]
	I0904 13:23:55.831118    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0904 13:23:55.841713    4660 logs.go:276] 0 containers: []
	W0904 13:23:55.841724    4660 logs.go:278] No container was found matching "kindnet"
	I0904 13:23:55.841779    4660 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0904 13:23:55.852699    4660 logs.go:276] 1 containers: [7e30c9e0d4c7]
	I0904 13:23:55.852717    4660 logs.go:123] Gathering logs for describe nodes ...
	I0904 13:23:55.852725    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 13:23:55.890733    4660 logs.go:123] Gathering logs for coredns [e267f8ea563f] ...
	I0904 13:23:55.890744    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e267f8ea563f"
	I0904 13:23:55.909089    4660 logs.go:123] Gathering logs for Docker ...
	I0904 13:23:55.909103    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0904 13:23:55.932019    4660 logs.go:123] Gathering logs for dmesg ...
	I0904 13:23:55.932027    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 13:23:55.936237    4660 logs.go:123] Gathering logs for coredns [af22e408a2fb] ...
	I0904 13:23:55.936242    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22e408a2fb"
	I0904 13:23:55.948398    4660 logs.go:123] Gathering logs for coredns [ce21979776a3] ...
	I0904 13:23:55.948413    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce21979776a3"
	I0904 13:23:55.964419    4660 logs.go:123] Gathering logs for kube-controller-manager [088cbdfececb] ...
	I0904 13:23:55.964433    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088cbdfececb"
	I0904 13:23:55.982202    4660 logs.go:123] Gathering logs for kubelet ...
	I0904 13:23:55.982216    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 13:23:56.017630    4660 logs.go:123] Gathering logs for kube-apiserver [b2da447975c0] ...
	I0904 13:23:56.017640    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2da447975c0"
	I0904 13:23:56.031427    4660 logs.go:123] Gathering logs for kube-scheduler [30e001967d9c] ...
	I0904 13:23:56.031438    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e001967d9c"
	I0904 13:23:56.046237    4660 logs.go:123] Gathering logs for kube-proxy [9ebf521cf3f1] ...
	I0904 13:23:56.046248    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ebf521cf3f1"
	I0904 13:23:56.058033    4660 logs.go:123] Gathering logs for etcd [85b7558d1af2] ...
	I0904 13:23:56.058047    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85b7558d1af2"
	I0904 13:23:56.072276    4660 logs.go:123] Gathering logs for coredns [fd6fc2bac646] ...
	I0904 13:23:56.072289    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6fc2bac646"
	I0904 13:23:56.088387    4660 logs.go:123] Gathering logs for storage-provisioner [7e30c9e0d4c7] ...
	I0904 13:23:56.088400    4660 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e30c9e0d4c7"
	I0904 13:23:56.099774    4660 logs.go:123] Gathering logs for container status ...
	I0904 13:23:56.099786    4660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 13:23:58.613284    4660 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0904 13:24:03.615433    4660 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0904 13:24:03.621464    4660 out.go:201] 
	W0904 13:24:03.626537    4660 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0904 13:24:03.626569    4660 out.go:270] * 
	W0904 13:24:03.629002    4660 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:03.644391    4660 out.go:201] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-175000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.95s)
TestPause/serial/Start (9.87s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-683000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-683000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.802852792s)
-- stdout --
	* [pause-683000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-683000" primary control-plane node in "pause-683000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-683000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-683000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-683000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-683000 -n pause-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-683000 -n pause-683000: exit status 7 (64.782125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-683000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.87s)
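The shape of this failure repeats in every block below: start creates the VM, hits the refused socket, deletes the machine, retries once after five seconds, and exits with status 80 (GUEST_PROVISION); the post-mortem status check then exits 7 with the host reported as "Stopped", which the test helper treats as "may be ok" and so skips log retrieval. The post-mortem can be reproduced by hand with the same command the harness runs:

	out/minikube-darwin-arm64 status --format={{.Host}} -p pause-683000 -n pause-683000
	echo "exit: $?"   # 7 here means the profile exists but its host is not running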

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-388000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-388000 --driver=qemu2 : exit status 80 (9.804556625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-388000" primary control-plane node in "NoKubernetes-388000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-388000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-388000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-388000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000: exit status 7 (67.328958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-388000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.87s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248574541s)

                                                
                                                
-- stdout --
	* [NoKubernetes-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-388000
	* Restarting existing qemu2 VM for "NoKubernetes-388000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-388000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-388000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000: exit status 7 (42.2025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-388000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)
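From this block onward the NoKubernetes-388000 profile already exists, so minikube takes the "Restarting existing qemu2 VM" path and the error chain begins with "driver start:" rather than "creating host: create:"; the refused socket_vmnet connection is still the failing step. Clearing the stale profile between attempts, as the output itself suggests, keeps later sub-tests from inheriting this state:

	out/minikube-darwin-arm64 delete -p NoKubernetes-388000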

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239414917s)

                                                
                                                
-- stdout --
	* [NoKubernetes-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-388000
	* Restarting existing qemu2 VM for "NoKubernetes-388000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-388000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-388000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000: exit status 7 (65.28525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-388000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-388000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-388000 --driver=qemu2 : exit status 80 (5.242785s)

                                                
                                                
-- stdout --
	* [NoKubernetes-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-388000
	* Restarting existing qemu2 VM for "NoKubernetes-388000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-388000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-388000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-388000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-388000 -n NoKubernetes-388000: exit status 7 (38.977916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-388000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.919755584s)

                                                
                                                
-- stdout --
	* [auto-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-134000" primary control-plane node in "auto-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:22:10.786386    4945 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:22:10.786522    4945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:10.786526    4945 out.go:358] Setting ErrFile to fd 2...
	I0904 13:22:10.786528    4945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:10.786647    4945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:22:10.787817    4945 out.go:352] Setting JSON to false
	I0904 13:22:10.805705    4945 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4894,"bootTime":1725476436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:22:10.805780    4945 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:22:10.812083    4945 out.go:177] * [auto-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:22:10.821906    4945 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:22:10.821956    4945 notify.go:220] Checking for updates...
	I0904 13:22:10.828879    4945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:22:10.831918    4945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:22:10.834905    4945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:22:10.837855    4945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:22:10.840896    4945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:22:10.844255    4945 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:22:10.844330    4945 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:22:10.844376    4945 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:22:10.847829    4945 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:22:10.854919    4945 start.go:297] selected driver: qemu2
	I0904 13:22:10.854926    4945 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:22:10.854932    4945 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:22:10.857306    4945 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:22:10.859900    4945 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:22:10.862924    4945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:22:10.862941    4945 cni.go:84] Creating CNI manager for ""
	I0904 13:22:10.862949    4945 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:22:10.862953    4945 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:22:10.862986    4945 start.go:340] cluster config:
	{Name:auto-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:22:10.866726    4945 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:22:10.872866    4945 out.go:177] * Starting "auto-134000" primary control-plane node in "auto-134000" cluster
	I0904 13:22:10.876849    4945 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:22:10.876875    4945 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:22:10.876885    4945 cache.go:56] Caching tarball of preloaded images
	I0904 13:22:10.876971    4945 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:22:10.876979    4945 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:22:10.877046    4945 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/auto-134000/config.json ...
	I0904 13:22:10.877060    4945 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/auto-134000/config.json: {Name:mkdfed9d23dbaf605ddd089886c310246009df18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:22:10.877329    4945 start.go:360] acquireMachinesLock for auto-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:10.877372    4945 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "auto-134000"
	I0904 13:22:10.877384    4945 start.go:93] Provisioning new machine with config: &{Name:auto-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:10.877438    4945 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:10.879854    4945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:10.895806    4945 start.go:159] libmachine.API.Create for "auto-134000" (driver="qemu2")
	I0904 13:22:10.895834    4945 client.go:168] LocalClient.Create starting
	I0904 13:22:10.895897    4945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:10.895927    4945 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:10.895936    4945 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:10.895977    4945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:10.895999    4945 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:10.896010    4945 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:10.896353    4945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:11.050983    4945 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:11.157804    4945 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:11.157811    4945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:11.157997    4945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2
	I0904 13:22:11.167230    4945 main.go:141] libmachine: STDOUT: 
	I0904 13:22:11.167251    4945 main.go:141] libmachine: STDERR: 
	I0904 13:22:11.167299    4945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2 +20000M
	I0904 13:22:11.175534    4945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:11.175553    4945 main.go:141] libmachine: STDERR: 
	I0904 13:22:11.175572    4945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2
	I0904 13:22:11.175578    4945 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:11.175592    4945 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:11.175622    4945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:75:87:c8:0b:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2
	I0904 13:22:11.177315    4945 main.go:141] libmachine: STDOUT: 
	I0904 13:22:11.177335    4945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:11.177358    4945 client.go:171] duration metric: took 281.522417ms to LocalClient.Create
	I0904 13:22:13.179568    4945 start.go:128] duration metric: took 2.302143709s to createHost
	I0904 13:22:13.179623    4945 start.go:83] releasing machines lock for "auto-134000", held for 2.302282458s
	W0904 13:22:13.179671    4945 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:13.185076    4945 out.go:177] * Deleting "auto-134000" in qemu2 ...
	W0904 13:22:13.203865    4945 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:13.203880    4945 start.go:729] Will try again in 5 seconds ...
	I0904 13:22:18.205997    4945 start.go:360] acquireMachinesLock for auto-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:18.206305    4945 start.go:364] duration metric: took 250.833µs to acquireMachinesLock for "auto-134000"
	I0904 13:22:18.206384    4945 start.go:93] Provisioning new machine with config: &{Name:auto-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:18.206510    4945 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:18.214871    4945 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:18.249755    4945 start.go:159] libmachine.API.Create for "auto-134000" (driver="qemu2")
	I0904 13:22:18.249825    4945 client.go:168] LocalClient.Create starting
	I0904 13:22:18.249960    4945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:18.250035    4945 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:18.250054    4945 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:18.250107    4945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:18.250147    4945 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:18.250163    4945 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:18.250635    4945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:18.416403    4945 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:18.617004    4945 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:18.617015    4945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:18.617226    4945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2
	I0904 13:22:18.626578    4945 main.go:141] libmachine: STDOUT: 
	I0904 13:22:18.626601    4945 main.go:141] libmachine: STDERR: 
	I0904 13:22:18.626646    4945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2 +20000M
	I0904 13:22:18.634647    4945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:18.634671    4945 main.go:141] libmachine: STDERR: 
	I0904 13:22:18.634685    4945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2
	I0904 13:22:18.634692    4945 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:18.634700    4945 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:18.634735    4945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:46:88:20:52:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/auto-134000/disk.qcow2
	I0904 13:22:18.636352    4945 main.go:141] libmachine: STDOUT: 
	I0904 13:22:18.636366    4945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:18.636383    4945 client.go:171] duration metric: took 386.550959ms to LocalClient.Create
	I0904 13:22:20.638466    4945 start.go:128] duration metric: took 2.431981375s to createHost
	I0904 13:22:20.638504    4945 start.go:83] releasing machines lock for "auto-134000", held for 2.432211417s
	W0904 13:22:20.638706    4945 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:20.651031    4945 out.go:201] 
	W0904 13:22:20.655070    4945 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:22:20.655079    4945 out.go:270] * 
	* 
	W0904 13:22:20.656080    4945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:22:20.668994    4945 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.92s)
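The --alsologtostderr trace above narrows the failure to the very first exec: qemu-img creates and resizes the disk successfully (empty STDERR, "Image resized."), but socket_vmnet_client dies on connect() before qemu-system-aarch64 parses a single flag. A quick way to separate the two suspects, assuming the binaries at the paths shown in the log:

	# Fails with "Connection refused" while the daemon is down, regardless of QEMU's health:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -version
	# QEMU alone, bypassing vmnet networking, confirms the emulator itself is fine:
	qemu-system-aarch64 -version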

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.783904917s)

                                                
                                                
-- stdout --
	* [calico-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-134000" primary control-plane node in "calico-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:22:22.832922    5056 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:22:22.833043    5056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:22.833047    5056 out.go:358] Setting ErrFile to fd 2...
	I0904 13:22:22.833049    5056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:22.833175    5056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:22:22.834256    5056 out.go:352] Setting JSON to false
	I0904 13:22:22.850848    5056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4906,"bootTime":1725476436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:22:22.850924    5056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:22:22.856056    5056 out.go:177] * [calico-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:22:22.864031    5056 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:22:22.864121    5056 notify.go:220] Checking for updates...
	I0904 13:22:22.871977    5056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:22:22.874979    5056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:22:22.877983    5056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:22:22.880948    5056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:22:22.883984    5056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:22:22.887346    5056 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:22:22.887418    5056 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:22:22.887467    5056 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:22:22.891951    5056 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:22:22.898957    5056 start.go:297] selected driver: qemu2
	I0904 13:22:22.898963    5056 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:22:22.898969    5056 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:22:22.901080    5056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:22:22.904051    5056 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:22:22.907096    5056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:22:22.907119    5056 cni.go:84] Creating CNI manager for "calico"
	I0904 13:22:22.907139    5056 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0904 13:22:22.907170    5056 start.go:340] cluster config:
	{Name:calico-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:22:22.910485    5056 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:22:22.918854    5056 out.go:177] * Starting "calico-134000" primary control-plane node in "calico-134000" cluster
	I0904 13:22:22.922977    5056 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:22:22.922998    5056 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:22:22.923006    5056 cache.go:56] Caching tarball of preloaded images
	I0904 13:22:22.923071    5056 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:22:22.923076    5056 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:22:22.923134    5056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/calico-134000/config.json ...
	I0904 13:22:22.923146    5056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/calico-134000/config.json: {Name:mkaafc0a7b852426911db8d559574021324846be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:22:22.923564    5056 start.go:360] acquireMachinesLock for calico-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:22.923602    5056 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "calico-134000"
	I0904 13:22:22.923612    5056 start.go:93] Provisioning new machine with config: &{Name:calico-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:22.923642    5056 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:22.930945    5056 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:22.946884    5056 start.go:159] libmachine.API.Create for "calico-134000" (driver="qemu2")
	I0904 13:22:22.946911    5056 client.go:168] LocalClient.Create starting
	I0904 13:22:22.946975    5056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:22.947008    5056 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:22.947020    5056 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:22.947056    5056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:22.947081    5056 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:22.947094    5056 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:22.947517    5056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:23.108793    5056 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:23.161221    5056 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:23.161225    5056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:23.161422    5056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2
	I0904 13:22:23.170729    5056 main.go:141] libmachine: STDOUT: 
	I0904 13:22:23.170748    5056 main.go:141] libmachine: STDERR: 
	I0904 13:22:23.170797    5056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2 +20000M
	I0904 13:22:23.178663    5056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:23.178684    5056 main.go:141] libmachine: STDERR: 
	I0904 13:22:23.178699    5056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2
	I0904 13:22:23.178704    5056 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:23.178714    5056 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:23.178747    5056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:12:6e:7b:4c:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2
	I0904 13:22:23.180410    5056 main.go:141] libmachine: STDOUT: 
	I0904 13:22:23.180427    5056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:23.180449    5056 client.go:171] duration metric: took 233.536417ms to LocalClient.Create
	I0904 13:22:25.182613    5056 start.go:128] duration metric: took 2.258978875s to createHost
	I0904 13:22:25.182677    5056 start.go:83] releasing machines lock for "calico-134000", held for 2.259105333s
	W0904 13:22:25.182767    5056 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:25.198802    5056 out.go:177] * Deleting "calico-134000" in qemu2 ...
	W0904 13:22:25.221271    5056 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:25.221295    5056 start.go:729] Will try again in 5 seconds ...
	I0904 13:22:30.223379    5056 start.go:360] acquireMachinesLock for calico-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:30.223690    5056 start.go:364] duration metric: took 249.042µs to acquireMachinesLock for "calico-134000"
	I0904 13:22:30.223727    5056 start.go:93] Provisioning new machine with config: &{Name:calico-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:30.223847    5056 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:30.233215    5056 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:30.263585    5056 start.go:159] libmachine.API.Create for "calico-134000" (driver="qemu2")
	I0904 13:22:30.263632    5056 client.go:168] LocalClient.Create starting
	I0904 13:22:30.263730    5056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:30.263792    5056 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:30.263814    5056 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:30.263862    5056 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:30.263899    5056 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:30.263911    5056 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:30.264373    5056 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:30.425797    5056 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:30.525045    5056 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:30.525053    5056 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:30.525283    5056 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2
	I0904 13:22:30.535045    5056 main.go:141] libmachine: STDOUT: 
	I0904 13:22:30.535062    5056 main.go:141] libmachine: STDERR: 
	I0904 13:22:30.535116    5056 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2 +20000M
	I0904 13:22:30.543477    5056 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:30.543496    5056 main.go:141] libmachine: STDERR: 
	I0904 13:22:30.543506    5056 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2
	I0904 13:22:30.543510    5056 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:30.543521    5056 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:30.543557    5056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:31:b1:46:e4:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/calico-134000/disk.qcow2
	I0904 13:22:30.545291    5056 main.go:141] libmachine: STDOUT: 
	I0904 13:22:30.545410    5056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:30.545422    5056 client.go:171] duration metric: took 281.79ms to LocalClient.Create
	I0904 13:22:32.547483    5056 start.go:128] duration metric: took 2.323656042s to createHost
	I0904 13:22:32.547520    5056 start.go:83] releasing machines lock for "calico-134000", held for 2.323854459s
	W0904 13:22:32.547668    5056 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:32.556962    5056 out.go:201] 
	W0904 13:22:32.564972    5056 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:22:32.564978    5056 out.go:270] * 
	* 
	W0904 13:22:32.565746    5056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:22:32.579945    5056 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.78s)

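Every failure in this group shares the root cause visible in the stderr above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. The Go sketch below is a minimal, hypothetical preflight probe, not minikube code; the only input it assumes is the socket path taken from the failing command line. It reproduces the check the client effectively performs:

	// Hypothetical preflight probe (illustrative, not part of minikube):
	// confirm that the socket_vmnet daemon is accepting connections
	// before QEMU is launched. A "connection refused" here reproduces
	// the failure seen in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing command line

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Refused on an existing socket file means no daemon is listening on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

A refused connection on a socket file that still exists usually means the daemon that owned it has exited; restarting socket_vmnet on the build agent would likely clear this whole group of failures.
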
TestNetworkPlugins/group/custom-flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.808400417s)

-- stdout --
	* [custom-flannel-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-134000" primary control-plane node in "custom-flannel-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:22:34.897196    5175 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:22:34.897343    5175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:34.897371    5175 out.go:358] Setting ErrFile to fd 2...
	I0904 13:22:34.897403    5175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:34.897778    5175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:22:34.899179    5175 out.go:352] Setting JSON to false
	I0904 13:22:34.915783    5175 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4918,"bootTime":1725476436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:22:34.915847    5175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:22:34.923038    5175 out.go:177] * [custom-flannel-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:22:34.931075    5175 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:22:34.931106    5175 notify.go:220] Checking for updates...
	I0904 13:22:34.937948    5175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:22:34.940982    5175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:22:34.943979    5175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:22:34.946988    5175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:22:34.950005    5175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:22:34.953317    5175 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:22:34.953390    5175 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:22:34.953435    5175 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:22:34.957933    5175 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:22:34.964961    5175 start.go:297] selected driver: qemu2
	I0904 13:22:34.964967    5175 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:22:34.964974    5175 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:22:34.967306    5175 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:22:34.969986    5175 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:22:34.973139    5175 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:22:34.973201    5175 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0904 13:22:34.973211    5175 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0904 13:22:34.973251    5175 start.go:340] cluster config:
	{Name:custom-flannel-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:22:34.976975    5175 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:22:34.985927    5175 out.go:177] * Starting "custom-flannel-134000" primary control-plane node in "custom-flannel-134000" cluster
	I0904 13:22:34.988969    5175 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:22:34.988986    5175 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:22:34.988993    5175 cache.go:56] Caching tarball of preloaded images
	I0904 13:22:34.989069    5175 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:22:34.989078    5175 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:22:34.989159    5175 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/custom-flannel-134000/config.json ...
	I0904 13:22:34.989179    5175 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/custom-flannel-134000/config.json: {Name:mk1ffb1ac27b5ace12f3bd45c75f8c805eb79cf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:22:34.989709    5175 start.go:360] acquireMachinesLock for custom-flannel-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:34.989746    5175 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "custom-flannel-134000"
	I0904 13:22:34.989760    5175 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:34.989798    5175 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:34.995994    5175 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:35.012948    5175 start.go:159] libmachine.API.Create for "custom-flannel-134000" (driver="qemu2")
	I0904 13:22:35.012970    5175 client.go:168] LocalClient.Create starting
	I0904 13:22:35.013030    5175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:35.013079    5175 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:35.013089    5175 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:35.013119    5175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:35.013142    5175 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:35.013148    5175 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:35.013483    5175 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:35.172488    5175 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:35.268314    5175 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:35.268332    5175 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:35.268551    5175 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2
	I0904 13:22:35.278075    5175 main.go:141] libmachine: STDOUT: 
	I0904 13:22:35.278093    5175 main.go:141] libmachine: STDERR: 
	I0904 13:22:35.278158    5175 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2 +20000M
	I0904 13:22:35.286607    5175 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:35.286628    5175 main.go:141] libmachine: STDERR: 
	I0904 13:22:35.286648    5175 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2
	I0904 13:22:35.286654    5175 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:35.286669    5175 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:35.286701    5175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:01:9d:1a:7e:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2
	I0904 13:22:35.288468    5175 main.go:141] libmachine: STDOUT: 
	I0904 13:22:35.288483    5175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:35.288502    5175 client.go:171] duration metric: took 275.532167ms to LocalClient.Create
	I0904 13:22:37.290573    5175 start.go:128] duration metric: took 2.300804125s to createHost
	I0904 13:22:37.290597    5175 start.go:83] releasing machines lock for "custom-flannel-134000", held for 2.300884583s
	W0904 13:22:37.290645    5175 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:37.300288    5175 out.go:177] * Deleting "custom-flannel-134000" in qemu2 ...
	W0904 13:22:37.317541    5175 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:37.317548    5175 start.go:729] Will try again in 5 seconds ...
	I0904 13:22:42.319556    5175 start.go:360] acquireMachinesLock for custom-flannel-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:42.320044    5175 start.go:364] duration metric: took 375.959µs to acquireMachinesLock for "custom-flannel-134000"
	I0904 13:22:42.320115    5175 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:42.320353    5175 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:42.329978    5175 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:42.374475    5175 start.go:159] libmachine.API.Create for "custom-flannel-134000" (driver="qemu2")
	I0904 13:22:42.374531    5175 client.go:168] LocalClient.Create starting
	I0904 13:22:42.374634    5175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:42.374690    5175 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:42.374709    5175 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:42.374768    5175 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:42.374806    5175 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:42.374815    5175 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:42.375304    5175 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:42.541859    5175 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:42.612235    5175 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:42.612242    5175 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:42.612441    5175 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2
	I0904 13:22:42.621989    5175 main.go:141] libmachine: STDOUT: 
	I0904 13:22:42.622010    5175 main.go:141] libmachine: STDERR: 
	I0904 13:22:42.622057    5175 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2 +20000M
	I0904 13:22:42.630157    5175 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:42.630173    5175 main.go:141] libmachine: STDERR: 
	I0904 13:22:42.630186    5175 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2
	I0904 13:22:42.630190    5175 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:42.630200    5175 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:42.630234    5175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8f:a5:d4:15:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/custom-flannel-134000/disk.qcow2
	I0904 13:22:42.631931    5175 main.go:141] libmachine: STDOUT: 
	I0904 13:22:42.631948    5175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:42.631960    5175 client.go:171] duration metric: took 257.427833ms to LocalClient.Create
	I0904 13:22:44.634149    5175 start.go:128] duration metric: took 2.313805416s to createHost
	I0904 13:22:44.634226    5175 start.go:83] releasing machines lock for "custom-flannel-134000", held for 2.314195625s
	W0904 13:22:44.634630    5175 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:44.644274    5175 out.go:201] 
	W0904 13:22:44.652414    5175 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:22:44.652441    5175 out.go:270] * 
	* 
	W0904 13:22:44.655671    5175 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:22:44.664186    5175 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.81s)

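The stderr above also shows minikube's recovery path: the first StartHost failure deletes the half-created profile, waits five seconds (start.go:729, "Will try again in 5 seconds"), and retries exactly once before exiting with GUEST_PROVISION. Because the error is environmental, the retry fails identically. A simplified, illustrative Go sketch of that retry-once shape follows; createHost here is a stand-in, not the real minikube function:

	// Illustrative sketch of the retry shape visible in the logs
	// ("! StartHost failed, but will try again: ..." / "Will try again in 5 seconds").
	// createHost stands in for minikube's real host-creation path.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// In these runs the error is environmental, so every attempt returns it.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := createHost()
		if err == nil {
			fmt.Println("host created")
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // fixed delay, matching the logged back-off
		if err := createHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err) // terminal failure
			return
		}
		fmt.Println("host created on retry")
	}

A single fixed-delay retry is enough when the failure is transient (e.g. a machines lock still held), but as these runs show, it cannot recover from a daemon that is simply not running.
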
TestNetworkPlugins/group/false/Start (9.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.89378975s)

-- stdout --
	* [false-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-134000" primary control-plane node in "false-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:22:47.080640    5296 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:22:47.080786    5296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:47.080790    5296 out.go:358] Setting ErrFile to fd 2...
	I0904 13:22:47.080792    5296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:47.080911    5296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:22:47.082017    5296 out.go:352] Setting JSON to false
	I0904 13:22:47.098022    5296 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4931,"bootTime":1725476436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:22:47.098093    5296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:22:47.104222    5296 out.go:177] * [false-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:22:47.111046    5296 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:22:47.111102    5296 notify.go:220] Checking for updates...
	I0904 13:22:47.117913    5296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:22:47.121020    5296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:22:47.124010    5296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:22:47.126906    5296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:22:47.129950    5296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:22:47.133387    5296 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:22:47.133457    5296 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:22:47.133504    5296 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:22:47.137929    5296 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:22:47.144983    5296 start.go:297] selected driver: qemu2
	I0904 13:22:47.144990    5296 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:22:47.144996    5296 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:22:47.147106    5296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:22:47.149903    5296 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:22:47.153024    5296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:22:47.153041    5296 cni.go:84] Creating CNI manager for "false"
	I0904 13:22:47.153064    5296 start.go:340] cluster config:
	{Name:false-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:22:47.156336    5296 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:22:47.163966    5296 out.go:177] * Starting "false-134000" primary control-plane node in "false-134000" cluster
	I0904 13:22:47.168030    5296 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:22:47.168048    5296 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:22:47.168060    5296 cache.go:56] Caching tarball of preloaded images
	I0904 13:22:47.168146    5296 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:22:47.168152    5296 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:22:47.168217    5296 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/false-134000/config.json ...
	I0904 13:22:47.168235    5296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/false-134000/config.json: {Name:mk99ee7a081fe702a5142cecab0c5bce17962f4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:22:47.168449    5296 start.go:360] acquireMachinesLock for false-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:47.168480    5296 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "false-134000"
	I0904 13:22:47.168492    5296 start.go:93] Provisioning new machine with config: &{Name:false-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:47.168515    5296 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:47.176963    5296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:47.192143    5296 start.go:159] libmachine.API.Create for "false-134000" (driver="qemu2")
	I0904 13:22:47.192166    5296 client.go:168] LocalClient.Create starting
	I0904 13:22:47.192228    5296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:47.192257    5296 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:47.192267    5296 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:47.192306    5296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:47.192328    5296 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:47.192337    5296 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:47.192713    5296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:47.350140    5296 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:47.484295    5296 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:47.484304    5296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:47.484526    5296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2
	I0904 13:22:47.494154    5296 main.go:141] libmachine: STDOUT: 
	I0904 13:22:47.494178    5296 main.go:141] libmachine: STDERR: 
	I0904 13:22:47.494229    5296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2 +20000M
	I0904 13:22:47.502105    5296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:47.502120    5296 main.go:141] libmachine: STDERR: 
	I0904 13:22:47.502140    5296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2
	I0904 13:22:47.502145    5296 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:47.502162    5296 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:47.502198    5296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:81:f8:af:2a:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2
	I0904 13:22:47.503799    5296 main.go:141] libmachine: STDOUT: 
	I0904 13:22:47.503812    5296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:47.503829    5296 client.go:171] duration metric: took 311.663292ms to LocalClient.Create
	I0904 13:22:49.506058    5296 start.go:128] duration metric: took 2.337552333s to createHost
	I0904 13:22:49.506131    5296 start.go:83] releasing machines lock for "false-134000", held for 2.337680875s
	W0904 13:22:49.506204    5296 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:49.517646    5296 out.go:177] * Deleting "false-134000" in qemu2 ...
	W0904 13:22:49.553787    5296 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:49.553818    5296 start.go:729] Will try again in 5 seconds ...
	I0904 13:22:54.556010    5296 start.go:360] acquireMachinesLock for false-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:54.556561    5296 start.go:364] duration metric: took 441.958µs to acquireMachinesLock for "false-134000"
	I0904 13:22:54.556783    5296 start.go:93] Provisioning new machine with config: &{Name:false-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:54.557100    5296 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:54.564749    5296 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:54.615733    5296 start.go:159] libmachine.API.Create for "false-134000" (driver="qemu2")
	I0904 13:22:54.615791    5296 client.go:168] LocalClient.Create starting
	I0904 13:22:54.615895    5296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:54.615965    5296 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:54.615981    5296 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:54.616056    5296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:54.616101    5296 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:54.616117    5296 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:54.616681    5296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:54.780354    5296 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:54.882892    5296 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:54.882902    5296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:54.883112    5296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2
	I0904 13:22:54.892433    5296 main.go:141] libmachine: STDOUT: 
	I0904 13:22:54.892453    5296 main.go:141] libmachine: STDERR: 
	I0904 13:22:54.892497    5296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2 +20000M
	I0904 13:22:54.900645    5296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:54.900662    5296 main.go:141] libmachine: STDERR: 
	I0904 13:22:54.900674    5296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2
	I0904 13:22:54.900678    5296 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:54.900699    5296 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:54.900724    5296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:a3:7f:86:dd:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/false-134000/disk.qcow2
	I0904 13:22:54.902393    5296 main.go:141] libmachine: STDOUT: 
	I0904 13:22:54.902411    5296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:54.902423    5296 client.go:171] duration metric: took 286.631209ms to LocalClient.Create
	I0904 13:22:56.904584    5296 start.go:128] duration metric: took 2.347493333s to createHost
	I0904 13:22:56.904648    5296 start.go:83] releasing machines lock for "false-134000", held for 2.348036541s
	W0904 13:22:56.904993    5296 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:22:56.913101    5296 out.go:201] 
	W0904 13:22:56.920228    5296 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:22:56.920290    5296 out.go:270] * 
	* 
	W0904 13:22:56.923089    5296 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:22:56.933063    5296 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.90s)
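
Every failure in this group reduces to the root cause visible above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which dials the Unix socket /var/run/socket_vmnet and gets "Connection refused", i.e. no socket_vmnet daemon was listening on the CI host. A minimal triage sketch for the agent follows; the paths come from the log above, while the launchctl query is an assumption (the daemon may equally be run manually or via Homebrew):

	ls -l /var/run/socket_vmnet          # does the Unix socket exist at all?
	pgrep -fl socket_vmnet               # is a socket_vmnet daemon process running?
	sudo launchctl list | grep -i vmnet  # assumes a launchd-managed install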

TestNetworkPlugins/group/kindnet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.821190334s)

-- stdout --
	* [kindnet-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-134000" primary control-plane node in "kindnet-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:22:59.139247    5411 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:22:59.139399    5411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:59.139405    5411 out.go:358] Setting ErrFile to fd 2...
	I0904 13:22:59.139407    5411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:22:59.139543    5411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:22:59.140692    5411 out.go:352] Setting JSON to false
	I0904 13:22:59.157056    5411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4943,"bootTime":1725476436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:22:59.157142    5411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:22:59.163095    5411 out.go:177] * [kindnet-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:22:59.170991    5411 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:22:59.171035    5411 notify.go:220] Checking for updates...
	I0904 13:22:59.178021    5411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:22:59.179537    5411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:22:59.183025    5411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:22:59.186051    5411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:22:59.189086    5411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:22:59.192423    5411 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:22:59.192493    5411 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:22:59.192539    5411 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:22:59.197027    5411 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:22:59.203988    5411 start.go:297] selected driver: qemu2
	I0904 13:22:59.203995    5411 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:22:59.204001    5411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:22:59.206309    5411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:22:59.210070    5411 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:22:59.213176    5411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:22:59.213207    5411 cni.go:84] Creating CNI manager for "kindnet"
	I0904 13:22:59.213211    5411 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 13:22:59.213242    5411 start.go:340] cluster config:
	{Name:kindnet-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:22:59.217010    5411 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:22:59.226068    5411 out.go:177] * Starting "kindnet-134000" primary control-plane node in "kindnet-134000" cluster
	I0904 13:22:59.230007    5411 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:22:59.230020    5411 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:22:59.230026    5411 cache.go:56] Caching tarball of preloaded images
	I0904 13:22:59.230079    5411 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:22:59.230084    5411 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:22:59.230141    5411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/kindnet-134000/config.json ...
	I0904 13:22:59.230154    5411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/kindnet-134000/config.json: {Name:mkb60bfb85f57f00a80e8992d49008f679e97f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:22:59.230567    5411 start.go:360] acquireMachinesLock for kindnet-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:22:59.230600    5411 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "kindnet-134000"
	I0904 13:22:59.230612    5411 start.go:93] Provisioning new machine with config: &{Name:kindnet-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:22:59.230648    5411 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:22:59.238988    5411 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:22:59.255206    5411 start.go:159] libmachine.API.Create for "kindnet-134000" (driver="qemu2")
	I0904 13:22:59.255228    5411 client.go:168] LocalClient.Create starting
	I0904 13:22:59.255288    5411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:22:59.255321    5411 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:59.255329    5411 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:59.255370    5411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:22:59.255392    5411 main.go:141] libmachine: Decoding PEM data...
	I0904 13:22:59.255401    5411 main.go:141] libmachine: Parsing certificate...
	I0904 13:22:59.255812    5411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:22:59.414740    5411 main.go:141] libmachine: Creating SSH key...
	I0904 13:22:59.478451    5411 main.go:141] libmachine: Creating Disk image...
	I0904 13:22:59.478457    5411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:22:59.478685    5411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2
	I0904 13:22:59.487934    5411 main.go:141] libmachine: STDOUT: 
	I0904 13:22:59.487955    5411 main.go:141] libmachine: STDERR: 
	I0904 13:22:59.488021    5411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2 +20000M
	I0904 13:22:59.495939    5411 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:22:59.495954    5411 main.go:141] libmachine: STDERR: 
	I0904 13:22:59.495966    5411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2
	I0904 13:22:59.495972    5411 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:22:59.495987    5411 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:22:59.496009    5411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:b3:96:f5:7d:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2
	I0904 13:22:59.497679    5411 main.go:141] libmachine: STDOUT: 
	I0904 13:22:59.497696    5411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:22:59.497716    5411 client.go:171] duration metric: took 242.487ms to LocalClient.Create
	I0904 13:23:01.499871    5411 start.go:128] duration metric: took 2.269236084s to createHost
	I0904 13:23:01.499932    5411 start.go:83] releasing machines lock for "kindnet-134000", held for 2.26936125s
	W0904 13:23:01.500002    5411 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:01.510761    5411 out.go:177] * Deleting "kindnet-134000" in qemu2 ...
	W0904 13:23:01.538117    5411 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:01.538147    5411 start.go:729] Will try again in 5 seconds ...
	I0904 13:23:06.540310    5411 start.go:360] acquireMachinesLock for kindnet-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:06.540920    5411 start.go:364] duration metric: took 461.875µs to acquireMachinesLock for "kindnet-134000"
	I0904 13:23:06.541097    5411 start.go:93] Provisioning new machine with config: &{Name:kindnet-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:06.541448    5411 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:06.547145    5411 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:06.595647    5411 start.go:159] libmachine.API.Create for "kindnet-134000" (driver="qemu2")
	I0904 13:23:06.595702    5411 client.go:168] LocalClient.Create starting
	I0904 13:23:06.595841    5411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:06.595907    5411 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:06.595921    5411 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:06.595999    5411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:06.596044    5411 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:06.596059    5411 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:06.596547    5411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:06.764898    5411 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:06.879086    5411 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:06.879095    5411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:06.879293    5411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2
	I0904 13:23:06.888975    5411 main.go:141] libmachine: STDOUT: 
	I0904 13:23:06.888997    5411 main.go:141] libmachine: STDERR: 
	I0904 13:23:06.889046    5411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2 +20000M
	I0904 13:23:06.897390    5411 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:06.897407    5411 main.go:141] libmachine: STDERR: 
	I0904 13:23:06.897428    5411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2
	I0904 13:23:06.897433    5411 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:06.897443    5411 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:06.897467    5411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:3c:96:9b:68:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kindnet-134000/disk.qcow2
	I0904 13:23:06.899114    5411 main.go:141] libmachine: STDOUT: 
	I0904 13:23:06.899132    5411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:06.899145    5411 client.go:171] duration metric: took 303.442041ms to LocalClient.Create
	I0904 13:23:08.899581    5411 start.go:128] duration metric: took 2.358115958s to createHost
	I0904 13:23:08.899607    5411 start.go:83] releasing machines lock for "kindnet-134000", held for 2.358706459s
	W0904 13:23:08.899698    5411 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:08.905798    5411 out.go:201] 
	W0904 13:23:08.913591    5411 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:23:08.913597    5411 out.go:270] * 
	* 
	W0904 13:23:08.914153    5411 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:23:08.923743    5411 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)
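
If socket_vmnet cannot be brought up on the agent, the upstream qemu2 driver documentation also describes a builtin user-mode network that bypasses socket_vmnet entirely (the VM is then not reachable from the host, so some tests would still fail for other reasons). Assuming this minikube build accepts that flag, a hypothetical isolated re-run of the command above would look like:

	out/minikube-darwin-arm64 start -p kindnet-134000 --memory=3072 --cni=kindnet --driver=qemu2 --network=builtin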

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.852462292s)

-- stdout --
	* [flannel-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-134000" primary control-plane node in "flannel-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:23:11.199535    5524 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:23:11.199645    5524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:11.199649    5524 out.go:358] Setting ErrFile to fd 2...
	I0904 13:23:11.199651    5524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:11.199810    5524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:23:11.200934    5524 out.go:352] Setting JSON to false
	I0904 13:23:11.217062    5524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4955,"bootTime":1725476436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:23:11.217165    5524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:23:11.224641    5524 out.go:177] * [flannel-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:23:11.231701    5524 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:23:11.231742    5524 notify.go:220] Checking for updates...
	I0904 13:23:11.238648    5524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:23:11.241530    5524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:23:11.244626    5524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:23:11.247623    5524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:23:11.248886    5524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:23:11.251901    5524 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:23:11.251961    5524 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:23:11.252011    5524 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:23:11.256637    5524 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:23:11.262579    5524 start.go:297] selected driver: qemu2
	I0904 13:23:11.262584    5524 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:23:11.262589    5524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:23:11.264641    5524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:23:11.267665    5524 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:23:11.270815    5524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:23:11.270843    5524 cni.go:84] Creating CNI manager for "flannel"
	I0904 13:23:11.270847    5524 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0904 13:23:11.270873    5524 start.go:340] cluster config:
	{Name:flannel-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:23:11.274332    5524 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:23:11.282615    5524 out.go:177] * Starting "flannel-134000" primary control-plane node in "flannel-134000" cluster
	I0904 13:23:11.286618    5524 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:23:11.286632    5524 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:23:11.286638    5524 cache.go:56] Caching tarball of preloaded images
	I0904 13:23:11.286694    5524 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:23:11.286700    5524 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:23:11.286757    5524 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/flannel-134000/config.json ...
	I0904 13:23:11.286770    5524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/flannel-134000/config.json: {Name:mka78e902254914131996f34a3784453c562fc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:23:11.287078    5524 start.go:360] acquireMachinesLock for flannel-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:11.287115    5524 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "flannel-134000"
	I0904 13:23:11.287126    5524 start.go:93] Provisioning new machine with config: &{Name:flannel-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:11.287164    5524 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:11.290685    5524 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:11.305623    5524 start.go:159] libmachine.API.Create for "flannel-134000" (driver="qemu2")
	I0904 13:23:11.305648    5524 client.go:168] LocalClient.Create starting
	I0904 13:23:11.305707    5524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:11.305738    5524 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:11.305747    5524 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:11.305786    5524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:11.305809    5524 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:11.305817    5524 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:11.306175    5524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:11.477468    5524 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:11.523250    5524 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:11.523257    5524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:11.523461    5524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2
	I0904 13:23:11.532764    5524 main.go:141] libmachine: STDOUT: 
	I0904 13:23:11.532789    5524 main.go:141] libmachine: STDERR: 
	I0904 13:23:11.532850    5524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2 +20000M
	I0904 13:23:11.540887    5524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:11.540902    5524 main.go:141] libmachine: STDERR: 
	I0904 13:23:11.540918    5524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2
	I0904 13:23:11.540924    5524 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:11.540936    5524 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:11.540966    5524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e0:49:83:8f:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2
	I0904 13:23:11.542666    5524 main.go:141] libmachine: STDOUT: 
	I0904 13:23:11.542681    5524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:11.542702    5524 client.go:171] duration metric: took 237.053875ms to LocalClient.Create
	I0904 13:23:13.544900    5524 start.go:128] duration metric: took 2.257737584s to createHost
	I0904 13:23:13.545003    5524 start.go:83] releasing machines lock for "flannel-134000", held for 2.257916417s
	W0904 13:23:13.545098    5524 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:13.558392    5524 out.go:177] * Deleting "flannel-134000" in qemu2 ...
	W0904 13:23:13.591329    5524 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:13.591361    5524 start.go:729] Will try again in 5 seconds ...
	I0904 13:23:18.593562    5524 start.go:360] acquireMachinesLock for flannel-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:18.594216    5524 start.go:364] duration metric: took 467.209µs to acquireMachinesLock for "flannel-134000"
	I0904 13:23:18.594379    5524 start.go:93] Provisioning new machine with config: &{Name:flannel-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:18.594704    5524 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:18.604392    5524 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:18.656775    5524 start.go:159] libmachine.API.Create for "flannel-134000" (driver="qemu2")
	I0904 13:23:18.656848    5524 client.go:168] LocalClient.Create starting
	I0904 13:23:18.656973    5524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:18.657045    5524 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:18.657061    5524 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:18.657127    5524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:18.657172    5524 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:18.657187    5524 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:18.657739    5524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:18.826511    5524 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:18.961883    5524 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:18.961894    5524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:18.962099    5524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2
	I0904 13:23:18.971605    5524 main.go:141] libmachine: STDOUT: 
	I0904 13:23:18.971629    5524 main.go:141] libmachine: STDERR: 
	I0904 13:23:18.971682    5524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2 +20000M
	I0904 13:23:18.979611    5524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:18.979628    5524 main.go:141] libmachine: STDERR: 
	I0904 13:23:18.979646    5524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2
	I0904 13:23:18.979651    5524 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:18.979662    5524 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:18.979695    5524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:30:95:7b:0f:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/flannel-134000/disk.qcow2
	I0904 13:23:18.981344    5524 main.go:141] libmachine: STDOUT: 
	I0904 13:23:18.981360    5524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:18.981373    5524 client.go:171] duration metric: took 324.524ms to LocalClient.Create
	I0904 13:23:20.983468    5524 start.go:128] duration metric: took 2.388787s to createHost
	I0904 13:23:20.983517    5524 start.go:83] releasing machines lock for "flannel-134000", held for 2.389320958s
	W0904 13:23:20.983600    5524 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:20.997069    5524 out.go:201] 
	W0904 13:23:20.999892    5524 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:23:20.999902    5524 out.go:270] * 
	* 
	W0904 13:23:21.000430    5524 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:23:21.011887    5524 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
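
Every failure in this group reduces to the same root cause: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which is refused at the unix socket /var/run/socket_vmnet, so the socket_vmnet daemon is evidently not running on this host. Below is a minimal Go sketch of a pre-flight probe for that socket; the path is taken from the logs above, but the probe itself is an assumption about how one might detect the condition up front, not part of the test suite.

// Hedged sketch: pre-flight probe for the socket_vmnet daemon. A refused
// connection reproduces the failure mode above in milliseconds instead of
// spending ~10s per test on a doomed VM start.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path as reported by the minikube logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same condition the qemu2 driver hits: connection refused.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}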

TestNetworkPlugins/group/enable-default-cni/Start (9.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0904 13:23:27.772215    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.8998275s)

-- stdout --
	* [enable-default-cni-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-134000" primary control-plane node in "enable-default-cni-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:23:23.373265    5645 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:23:23.373388    5645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:23.373391    5645 out.go:358] Setting ErrFile to fd 2...
	I0904 13:23:23.373394    5645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:23.373528    5645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:23:23.374587    5645 out.go:352] Setting JSON to false
	I0904 13:23:23.390879    5645 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4967,"bootTime":1725476436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:23:23.390948    5645 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:23:23.396933    5645 out.go:177] * [enable-default-cni-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:23:23.404921    5645 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:23:23.404981    5645 notify.go:220] Checking for updates...
	I0904 13:23:23.411860    5645 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:23:23.414920    5645 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:23:23.416377    5645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:23:23.419821    5645 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:23:23.422854    5645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:23:23.426159    5645 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:23:23.426225    5645 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:23:23.426267    5645 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:23:23.430847    5645 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:23:23.437888    5645 start.go:297] selected driver: qemu2
	I0904 13:23:23.437896    5645 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:23:23.437905    5645 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:23:23.440330    5645 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:23:23.442935    5645 out.go:177] * Automatically selected the socket_vmnet network
	E0904 13:23:23.445921    5645 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0904 13:23:23.445933    5645 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:23:23.445960    5645 cni.go:84] Creating CNI manager for "bridge"
	I0904 13:23:23.445968    5645 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:23:23.445992    5645 start.go:340] cluster config:
	{Name:enable-default-cni-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:23:23.449752    5645 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:23:23.457863    5645 out.go:177] * Starting "enable-default-cni-134000" primary control-plane node in "enable-default-cni-134000" cluster
	I0904 13:23:23.461799    5645 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:23:23.461813    5645 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:23:23.461819    5645 cache.go:56] Caching tarball of preloaded images
	I0904 13:23:23.461876    5645 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:23:23.461882    5645 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:23:23.461943    5645 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/enable-default-cni-134000/config.json ...
	I0904 13:23:23.461956    5645 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/enable-default-cni-134000/config.json: {Name:mkdcf0722bf65e30aa27bfd3554b720e6d7e0418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:23:23.462373    5645 start.go:360] acquireMachinesLock for enable-default-cni-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:23.462407    5645 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "enable-default-cni-134000"
	I0904 13:23:23.462419    5645 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:23.462455    5645 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:23.470900    5645 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:23.487772    5645 start.go:159] libmachine.API.Create for "enable-default-cni-134000" (driver="qemu2")
	I0904 13:23:23.487797    5645 client.go:168] LocalClient.Create starting
	I0904 13:23:23.487855    5645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:23.487886    5645 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:23.487899    5645 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:23.487941    5645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:23.487963    5645 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:23.487972    5645 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:23.488342    5645 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:23.646684    5645 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:23.766214    5645 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:23.766221    5645 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:23.766424    5645 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2
	I0904 13:23:23.775937    5645 main.go:141] libmachine: STDOUT: 
	I0904 13:23:23.775969    5645 main.go:141] libmachine: STDERR: 
	I0904 13:23:23.776028    5645 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2 +20000M
	I0904 13:23:23.784173    5645 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:23.784187    5645 main.go:141] libmachine: STDERR: 
	I0904 13:23:23.784203    5645 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2
	I0904 13:23:23.784210    5645 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:23.784223    5645 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:23.784248    5645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:2e:b7:c3:5d:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2
	I0904 13:23:23.785921    5645 main.go:141] libmachine: STDOUT: 
	I0904 13:23:23.785936    5645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:23.785959    5645 client.go:171] duration metric: took 298.162208ms to LocalClient.Create
	I0904 13:23:25.788128    5645 start.go:128] duration metric: took 2.32568175s to createHost
	I0904 13:23:25.788196    5645 start.go:83] releasing machines lock for "enable-default-cni-134000", held for 2.32582025s
	W0904 13:23:25.788264    5645 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:25.803123    5645 out.go:177] * Deleting "enable-default-cni-134000" in qemu2 ...
	W0904 13:23:25.831896    5645 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:25.831923    5645 start.go:729] Will try again in 5 seconds ...
	I0904 13:23:30.834243    5645 start.go:360] acquireMachinesLock for enable-default-cni-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:30.834846    5645 start.go:364] duration metric: took 474.166µs to acquireMachinesLock for "enable-default-cni-134000"
	I0904 13:23:30.835020    5645 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:30.835332    5645 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:30.844047    5645 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:30.891124    5645 start.go:159] libmachine.API.Create for "enable-default-cni-134000" (driver="qemu2")
	I0904 13:23:30.891175    5645 client.go:168] LocalClient.Create starting
	I0904 13:23:30.891291    5645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:30.891357    5645 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:30.891374    5645 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:30.891435    5645 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:30.891480    5645 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:30.891492    5645 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:30.892063    5645 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:31.059149    5645 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:31.181600    5645 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:31.181609    5645 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:31.181825    5645 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2
	I0904 13:23:31.191395    5645 main.go:141] libmachine: STDOUT: 
	I0904 13:23:31.191412    5645 main.go:141] libmachine: STDERR: 
	I0904 13:23:31.191475    5645 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2 +20000M
	I0904 13:23:31.200136    5645 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:31.200158    5645 main.go:141] libmachine: STDERR: 
	I0904 13:23:31.200192    5645 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2
	I0904 13:23:31.200197    5645 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:31.200211    5645 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:31.200252    5645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:8c:a7:72:ed:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/enable-default-cni-134000/disk.qcow2
	I0904 13:23:31.202021    5645 main.go:141] libmachine: STDOUT: 
	I0904 13:23:31.202037    5645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:31.202049    5645 client.go:171] duration metric: took 310.870458ms to LocalClient.Create
	I0904 13:23:33.204196    5645 start.go:128] duration metric: took 2.368857084s to createHost
	I0904 13:23:33.204249    5645 start.go:83] releasing machines lock for "enable-default-cni-134000", held for 2.369418584s
	W0904 13:23:33.204616    5645 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:33.214220    5645 out.go:201] 
	W0904 13:23:33.218255    5645 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:23:33.218297    5645 out.go:270] * 
	* 
	W0904 13:23:33.220369    5645 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:23:33.230227    5645 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.90s)
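
The only wrinkle specific to this test is the warning at start_flags.go:464 in the stderr above: the deprecated --enable-default-cni flag is rewritten to --cni=bridge, which is why the generated cluster config carries EnableDefaultCNI:false CNI:bridge and the run is otherwise identical to the bridge test below. A hedged Go sketch of that normalization follows; the names are illustrative, not minikube's actual implementation.

// Hedged sketch of the flag rewrite the log reports: the legacy boolean
// flag is folded into the modern --cni value. Illustrative names only.
package main

import "fmt"

// normalizeCNI maps the deprecated boolean flag onto a --cni value.
func normalizeCNI(enableDefaultCNI bool, cni string) string {
	if enableDefaultCNI && cni == "" {
		return "bridge" // --enable-default-cni implies the bridge CNI
	}
	return cni
}

func main() {
	fmt.Println(normalizeCNI(true, ""))         // bridge
	fmt.Println(normalizeCNI(false, "flannel")) // flannel
}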

TestNetworkPlugins/group/bridge/Start (9.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.896803667s)

-- stdout --
	* [bridge-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-134000" primary control-plane node in "bridge-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:23:35.446633    5754 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:23:35.446774    5754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:35.446778    5754 out.go:358] Setting ErrFile to fd 2...
	I0904 13:23:35.446780    5754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:35.446922    5754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:23:35.448104    5754 out.go:352] Setting JSON to false
	I0904 13:23:35.464776    5754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4979,"bootTime":1725476436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:23:35.464868    5754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:23:35.471936    5754 out.go:177] * [bridge-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:23:35.478897    5754 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:23:35.478959    5754 notify.go:220] Checking for updates...
	I0904 13:23:35.485918    5754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:23:35.488832    5754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:23:35.491828    5754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:23:35.494816    5754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:23:35.497862    5754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:23:35.501176    5754 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:23:35.501245    5754 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:23:35.501296    5754 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:23:35.505830    5754 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:23:35.512815    5754 start.go:297] selected driver: qemu2
	I0904 13:23:35.512822    5754 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:23:35.512829    5754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:23:35.515171    5754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:23:35.518900    5754 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:23:35.521927    5754 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:23:35.521972    5754 cni.go:84] Creating CNI manager for "bridge"
	I0904 13:23:35.521977    5754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:23:35.522020    5754 start.go:340] cluster config:
	{Name:bridge-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:23:35.525775    5754 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:23:35.534822    5754 out.go:177] * Starting "bridge-134000" primary control-plane node in "bridge-134000" cluster
	I0904 13:23:35.548893    5754 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:23:35.548906    5754 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:23:35.548913    5754 cache.go:56] Caching tarball of preloaded images
	I0904 13:23:35.548971    5754 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:23:35.548977    5754 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:23:35.549028    5754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/bridge-134000/config.json ...
	I0904 13:23:35.549041    5754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/bridge-134000/config.json: {Name:mk55389df7f8c061b78c5c7bf8079d90a1df1b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:23:35.549462    5754 start.go:360] acquireMachinesLock for bridge-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:35.549494    5754 start.go:364] duration metric: took 26.75µs to acquireMachinesLock for "bridge-134000"
	I0904 13:23:35.549505    5754 start.go:93] Provisioning new machine with config: &{Name:bridge-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:35.549538    5754 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:35.557846    5754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:35.573588    5754 start.go:159] libmachine.API.Create for "bridge-134000" (driver="qemu2")
	I0904 13:23:35.573615    5754 client.go:168] LocalClient.Create starting
	I0904 13:23:35.573684    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:35.573713    5754 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:35.573722    5754 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:35.573766    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:35.573789    5754 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:35.573799    5754 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:35.574118    5754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:35.733134    5754 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:35.800755    5754 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:35.800761    5754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:35.800954    5754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2
	I0904 13:23:35.810370    5754 main.go:141] libmachine: STDOUT: 
	I0904 13:23:35.810389    5754 main.go:141] libmachine: STDERR: 
	I0904 13:23:35.810442    5754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2 +20000M
	I0904 13:23:35.818392    5754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:35.818408    5754 main.go:141] libmachine: STDERR: 
	I0904 13:23:35.818421    5754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2
	I0904 13:23:35.818426    5754 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:35.818440    5754 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:35.818465    5754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:55:50:87:96:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2
	I0904 13:23:35.820108    5754 main.go:141] libmachine: STDOUT: 
	I0904 13:23:35.820125    5754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:35.820146    5754 client.go:171] duration metric: took 246.526791ms to LocalClient.Create
	I0904 13:23:37.822403    5754 start.go:128] duration metric: took 2.272801625s to createHost
	I0904 13:23:37.822485    5754 start.go:83] releasing machines lock for "bridge-134000", held for 2.273018875s
	W0904 13:23:37.822537    5754 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:37.836553    5754 out.go:177] * Deleting "bridge-134000" in qemu2 ...
	W0904 13:23:37.866567    5754 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:37.866599    5754 start.go:729] Will try again in 5 seconds ...
	I0904 13:23:42.868724    5754 start.go:360] acquireMachinesLock for bridge-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:42.869300    5754 start.go:364] duration metric: took 446.209µs to acquireMachinesLock for "bridge-134000"
	I0904 13:23:42.869364    5754 start.go:93] Provisioning new machine with config: &{Name:bridge-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:42.869624    5754 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:42.878105    5754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:42.928816    5754 start.go:159] libmachine.API.Create for "bridge-134000" (driver="qemu2")
	I0904 13:23:42.928885    5754 client.go:168] LocalClient.Create starting
	I0904 13:23:42.928995    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:42.929054    5754 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:42.929071    5754 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:42.929133    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:42.929177    5754 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:42.929210    5754 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:42.929776    5754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:43.097784    5754 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:43.249782    5754 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:43.249792    5754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:43.250028    5754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2
	I0904 13:23:43.259724    5754 main.go:141] libmachine: STDOUT: 
	I0904 13:23:43.259752    5754 main.go:141] libmachine: STDERR: 
	I0904 13:23:43.259809    5754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2 +20000M
	I0904 13:23:43.267787    5754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:43.267807    5754 main.go:141] libmachine: STDERR: 
	I0904 13:23:43.267823    5754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2
	I0904 13:23:43.267829    5754 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:43.267839    5754 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:43.267868    5754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:9b:5c:35:1d:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/bridge-134000/disk.qcow2
	I0904 13:23:43.269561    5754 main.go:141] libmachine: STDOUT: 
	I0904 13:23:43.269580    5754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:43.269593    5754 client.go:171] duration metric: took 340.7075ms to LocalClient.Create
	I0904 13:23:45.271786    5754 start.go:128] duration metric: took 2.402161916s to createHost
	I0904 13:23:45.271883    5754 start.go:83] releasing machines lock for "bridge-134000", held for 2.402600375s
	W0904 13:23:45.272329    5754 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:45.288078    5754 out.go:201] 
	W0904 13:23:45.292086    5754 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:23:45.292119    5754 out.go:270] * 
	* 
	W0904 13:23:45.293730    5754 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:23:45.301851    5754 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)
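
Each start in this group shows the same two-attempt shape: libmachine.API.Create fails, the half-created profile is deleted, minikube logs "Will try again in 5 seconds ...", and the second attempt fails identically before the run exits with status 80. A hedged Go sketch of that retry-once pattern follows; startHost is a stand-in for the driver call, not minikube's real signature.

// Hedged sketch of the two-attempt start pattern visible in the logs:
// one retry after a fixed 5s delay, then give up.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for libmachine.API.Create; in this run every
// attempt is refused at the socket_vmnet socket.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		return startHost()          // second and final attempt
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}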

TestNetworkPlugins/group/kubenet/Start (9.95s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0904 13:23:55.854723    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-134000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.948107167s)

-- stdout --
	* [kubenet-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-134000" primary control-plane node in "kubenet-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:23:47.475477    5868 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:23:47.475618    5868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:47.475621    5868 out.go:358] Setting ErrFile to fd 2...
	I0904 13:23:47.475624    5868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:47.475751    5868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:23:47.476841    5868 out.go:352] Setting JSON to false
	I0904 13:23:47.493021    5868 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4991,"bootTime":1725476436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:23:47.493090    5868 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:23:47.500520    5868 out.go:177] * [kubenet-134000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:23:47.508404    5868 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:23:47.508435    5868 notify.go:220] Checking for updates...
	I0904 13:23:47.514368    5868 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:23:47.517386    5868 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:23:47.518844    5868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:23:47.522381    5868 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:23:47.525386    5868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:23:47.528751    5868 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:23:47.528821    5868 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:23:47.528876    5868 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:23:47.533341    5868 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:23:47.540410    5868 start.go:297] selected driver: qemu2
	I0904 13:23:47.540417    5868 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:23:47.540423    5868 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:23:47.542641    5868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:23:47.545300    5868 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:23:47.548437    5868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:23:47.548495    5868 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0904 13:23:47.548537    5868 start.go:340] cluster config:
	{Name:kubenet-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:23:47.552674    5868 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:23:47.561374    5868 out.go:177] * Starting "kubenet-134000" primary control-plane node in "kubenet-134000" cluster
	I0904 13:23:47.565383    5868 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:23:47.565401    5868 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:23:47.565409    5868 cache.go:56] Caching tarball of preloaded images
	I0904 13:23:47.565468    5868 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:23:47.565473    5868 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:23:47.565532    5868 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/kubenet-134000/config.json ...
	I0904 13:23:47.565544    5868 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/kubenet-134000/config.json: {Name:mk3e05792997288743d78187b50008de80d00419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:23:47.565990    5868 start.go:360] acquireMachinesLock for kubenet-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:47.566029    5868 start.go:364] duration metric: took 32.542µs to acquireMachinesLock for "kubenet-134000"
	I0904 13:23:47.566041    5868 start.go:93] Provisioning new machine with config: &{Name:kubenet-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:47.566073    5868 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:47.570347    5868 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:47.586255    5868 start.go:159] libmachine.API.Create for "kubenet-134000" (driver="qemu2")
	I0904 13:23:47.586285    5868 client.go:168] LocalClient.Create starting
	I0904 13:23:47.586352    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:47.586389    5868 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:47.586400    5868 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:47.586438    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:47.586461    5868 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:47.586468    5868 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:47.586858    5868 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:47.766212    5868 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:47.970012    5868 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:47.970032    5868 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:47.970279    5868 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2
	I0904 13:23:47.980766    5868 main.go:141] libmachine: STDOUT: 
	I0904 13:23:47.980789    5868 main.go:141] libmachine: STDERR: 
	I0904 13:23:47.980860    5868 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2 +20000M
	I0904 13:23:47.990404    5868 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:47.990424    5868 main.go:141] libmachine: STDERR: 
	I0904 13:23:47.990450    5868 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2
	I0904 13:23:47.990455    5868 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:47.990476    5868 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:47.990515    5868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:35:b6:b1:35:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2
	I0904 13:23:47.992607    5868 main.go:141] libmachine: STDOUT: 
	I0904 13:23:47.992625    5868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:47.992645    5868 client.go:171] duration metric: took 406.361584ms to LocalClient.Create
	I0904 13:23:49.994801    5868 start.go:128] duration metric: took 2.428741583s to createHost
	I0904 13:23:49.994863    5868 start.go:83] releasing machines lock for "kubenet-134000", held for 2.428867541s
	W0904 13:23:49.994927    5868 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:50.007715    5868 out.go:177] * Deleting "kubenet-134000" in qemu2 ...
	W0904 13:23:50.033559    5868 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:50.033584    5868 start.go:729] Will try again in 5 seconds ...
	I0904 13:23:55.035790    5868 start.go:360] acquireMachinesLock for kubenet-134000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:55.036189    5868 start.go:364] duration metric: took 302.084µs to acquireMachinesLock for "kubenet-134000"
	I0904 13:23:55.036312    5868 start.go:93] Provisioning new machine with config: &{Name:kubenet-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:55.036540    5868 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:55.047219    5868 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 13:23:55.095887    5868 start.go:159] libmachine.API.Create for "kubenet-134000" (driver="qemu2")
	I0904 13:23:55.095962    5868 client.go:168] LocalClient.Create starting
	I0904 13:23:55.096074    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:55.096132    5868 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:55.096150    5868 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:55.096219    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:55.096276    5868 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:55.096287    5868 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:55.096875    5868 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:55.265654    5868 main.go:141] libmachine: Creating SSH key...
	I0904 13:23:55.331898    5868 main.go:141] libmachine: Creating Disk image...
	I0904 13:23:55.331906    5868 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:23:55.332114    5868 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2
	I0904 13:23:55.341544    5868 main.go:141] libmachine: STDOUT: 
	I0904 13:23:55.341566    5868 main.go:141] libmachine: STDERR: 
	I0904 13:23:55.341630    5868 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2 +20000M
	I0904 13:23:55.349847    5868 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:23:55.349864    5868 main.go:141] libmachine: STDERR: 
	I0904 13:23:55.349877    5868 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2
	I0904 13:23:55.349882    5868 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:23:55.349891    5868 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:23:55.349919    5868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:27:32:5e:02:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/kubenet-134000/disk.qcow2
	I0904 13:23:55.351502    5868 main.go:141] libmachine: STDOUT: 
	I0904 13:23:55.351519    5868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:23:55.351533    5868 client.go:171] duration metric: took 255.569375ms to LocalClient.Create
	I0904 13:23:57.352365    5868 start.go:128] duration metric: took 2.31580475s to createHost
	I0904 13:23:57.352445    5868 start.go:83] releasing machines lock for "kubenet-134000", held for 2.316267708s
	W0904 13:23:57.352957    5868 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:23:57.361593    5868 out.go:201] 
	W0904 13:23:57.368669    5868 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:23:57.368707    5868 out.go:270] * 
	* 
	W0904 13:23:57.371467    5868 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:23:57.380618    5868 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.95s)
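Every failure in this plugin group traces back to the single stderr line above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and minikube aborts with GUEST_PROVISION. A minimal Go sketch, not part of the test suite, that reproduces the refusal without booting a VM (the socket path is copied from the log; the 2-second timeout is an arbitrary choice):

	// probe_socket_vmnet.go — dials the same unix socket that
	// socket_vmnet_client is handed in the driver logs above; a
	// "connection refused" here reproduces the failure without QEMU.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening on %s\n", sock)
	}

If the dial fails, the fix is on the host rather than in minikube: the socket_vmnet daemon has to be running and listening on that path before any qemu2-driver test on this agent can pass.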

TestStartStop/group/old-k8s-version/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-455000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-455000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.910432667s)

-- stdout --
	* [old-k8s-version-455000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-455000" primary control-plane node in "old-k8s-version-455000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-455000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:23:59.575307    5982 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:23:59.575429    5982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:59.575432    5982 out.go:358] Setting ErrFile to fd 2...
	I0904 13:23:59.575434    5982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:23:59.575577    5982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:23:59.576696    5982 out.go:352] Setting JSON to false
	I0904 13:23:59.592837    5982 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5003,"bootTime":1725476436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:23:59.592924    5982 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:23:59.599494    5982 out.go:177] * [old-k8s-version-455000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:23:59.607491    5982 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:23:59.607552    5982 notify.go:220] Checking for updates...
	I0904 13:23:59.615533    5982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:23:59.618501    5982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:23:59.621522    5982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:23:59.624549    5982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:23:59.627456    5982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:23:59.630827    5982 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:23:59.630891    5982 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:23:59.630928    5982 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:23:59.635458    5982 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:23:59.642490    5982 start.go:297] selected driver: qemu2
	I0904 13:23:59.642499    5982 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:23:59.642506    5982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:23:59.644741    5982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:23:59.648524    5982 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:23:59.651566    5982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:23:59.651598    5982 cni.go:84] Creating CNI manager for ""
	I0904 13:23:59.651605    5982 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 13:23:59.651627    5982 start.go:340] cluster config:
	{Name:old-k8s-version-455000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:23:59.655035    5982 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:23:59.663331    5982 out.go:177] * Starting "old-k8s-version-455000" primary control-plane node in "old-k8s-version-455000" cluster
	I0904 13:23:59.667436    5982 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 13:23:59.667448    5982 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0904 13:23:59.667455    5982 cache.go:56] Caching tarball of preloaded images
	I0904 13:23:59.667518    5982 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:23:59.667523    5982 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0904 13:23:59.667584    5982 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/old-k8s-version-455000/config.json ...
	I0904 13:23:59.667596    5982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/old-k8s-version-455000/config.json: {Name:mk7fe7153f26bb9772616e068e352181f0cb4c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:23:59.667803    5982 start.go:360] acquireMachinesLock for old-k8s-version-455000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:23:59.667838    5982 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "old-k8s-version-455000"
	I0904 13:23:59.667849    5982 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:23:59.667881    5982 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:23:59.675523    5982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:23:59.690764    5982 start.go:159] libmachine.API.Create for "old-k8s-version-455000" (driver="qemu2")
	I0904 13:23:59.690782    5982 client.go:168] LocalClient.Create starting
	I0904 13:23:59.690844    5982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:23:59.690875    5982 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:59.690883    5982 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:59.690923    5982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:23:59.690945    5982 main.go:141] libmachine: Decoding PEM data...
	I0904 13:23:59.690954    5982 main.go:141] libmachine: Parsing certificate...
	I0904 13:23:59.691478    5982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:23:59.851891    5982 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:00.055443    5982 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:00.055455    5982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:00.055708    5982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:00.065397    5982 main.go:141] libmachine: STDOUT: 
	I0904 13:24:00.065415    5982 main.go:141] libmachine: STDERR: 
	I0904 13:24:00.065473    5982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2 +20000M
	I0904 13:24:00.073698    5982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:00.073711    5982 main.go:141] libmachine: STDERR: 
	I0904 13:24:00.073730    5982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:00.073735    5982 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:00.073748    5982 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:00.073783    5982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:01:02:1d:4a:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:00.075440    5982 main.go:141] libmachine: STDOUT: 
	I0904 13:24:00.075453    5982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:00.075470    5982 client.go:171] duration metric: took 384.690458ms to LocalClient.Create
	I0904 13:24:02.077794    5982 start.go:128] duration metric: took 2.409897375s to createHost
	I0904 13:24:02.077906    5982 start.go:83] releasing machines lock for "old-k8s-version-455000", held for 2.410098583s
	W0904 13:24:02.077959    5982 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:02.089348    5982 out.go:177] * Deleting "old-k8s-version-455000" in qemu2 ...
	W0904 13:24:02.123117    5982 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:02.123155    5982 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:07.125394    5982 start.go:360] acquireMachinesLock for old-k8s-version-455000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:07.125974    5982 start.go:364] duration metric: took 446.083µs to acquireMachinesLock for "old-k8s-version-455000"
	I0904 13:24:07.126133    5982 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:07.126478    5982 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:07.136111    5982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:07.186359    5982 start.go:159] libmachine.API.Create for "old-k8s-version-455000" (driver="qemu2")
	I0904 13:24:07.186411    5982 client.go:168] LocalClient.Create starting
	I0904 13:24:07.186526    5982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:07.186599    5982 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:07.186618    5982 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:07.186687    5982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:07.186731    5982 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:07.186746    5982 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:07.187359    5982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:07.356529    5982 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:07.396407    5982 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:07.396415    5982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:07.396662    5982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:07.406159    5982 main.go:141] libmachine: STDOUT: 
	I0904 13:24:07.406178    5982 main.go:141] libmachine: STDERR: 
	I0904 13:24:07.406240    5982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2 +20000M
	I0904 13:24:07.414316    5982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:07.414333    5982 main.go:141] libmachine: STDERR: 
	I0904 13:24:07.414345    5982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:07.414350    5982 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:07.414363    5982 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:07.414388    5982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:e9:05:a1:0a:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:07.416039    5982 main.go:141] libmachine: STDOUT: 
	I0904 13:24:07.416058    5982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:07.416082    5982 client.go:171] duration metric: took 229.658625ms to LocalClient.Create
	I0904 13:24:09.418258    5982 start.go:128] duration metric: took 2.291778291s to createHost
	I0904 13:24:09.418429    5982 start.go:83] releasing machines lock for "old-k8s-version-455000", held for 2.292431292s
	W0904 13:24:09.418787    5982 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-455000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-455000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:09.427238    5982 out.go:201] 
	W0904 13:24:09.434364    5982 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:09.434429    5982 out.go:270] * 
	* 
	W0904 13:24:09.436842    5982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:09.447233    5982 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-455000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (59.92825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.97s)
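The serial subtests that follow never see a cluster: because FirstStart failed, the kubeconfig context "old-k8s-version-455000" was never created, so DeployApp and EnableAddonWhileActive below fail with context "old-k8s-version-455000" does not exist rather than adding new information. A hedged sketch of that precondition check, using k8s.io/client-go (assumed available here, since the minikube test suite already depends on it):

	// check_context.go — verifies the precondition the following subtests
	// assume: that the profile's context exists in the kubeconfig pointed
	// to by $KUBECONFIG (here /Users/jenkins/minikube-integration/19575-1140/kubeconfig).
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		const name = "old-k8s-version-455000"
		// Load honors $KUBECONFIG and falls back to ~/.kube/config.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Fprintf(os.Stderr, "context %q does not exist; the serial subtests below can only cascade\n", name)
			os.Exit(1)
		}
		fmt.Printf("context %q exists\n", name)
	}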

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-455000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-455000 create -f testdata/busybox.yaml: exit status 1 (28.914167ms)

** stderr ** 
	error: context "old-k8s-version-455000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-455000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (30.216292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (30.041459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-455000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-455000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-455000 describe deploy/metrics-server -n kube-system: exit status 1 (26.775667ms)

** stderr ** 
	error: context "old-k8s-version-455000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-455000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (30.043583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
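One detail worth noting in the assertion above: the expected string "fake.domain/registry.k8s.io/echoserver:1.4" is simply the --registries override prepended to the --images override for the MetricsServer key; the check comes back empty only because the deployment was never created. A worked sketch of that composition (inferred from the flags and the assertion text, not from the addon source):

	// The addon assertion expects registry + "/" + image, composed from
	// --registries=MetricsServer=fake.domain and
	// --images=MetricsServer=registry.k8s.io/echoserver:1.4.
	package main

	import "fmt"

	func main() {
		registry := "fake.domain"
		image := "registry.k8s.io/echoserver:1.4"
		fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
	}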

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-455000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-455000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.185239209s)

-- stdout --
	* [old-k8s-version-455000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-455000" primary control-plane node in "old-k8s-version-455000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-455000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-455000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:24:13.179039    6037 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:13.179153    6037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:13.179156    6037 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:13.179158    6037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:13.179319    6037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:13.180300    6037 out.go:352] Setting JSON to false
	I0904 13:24:13.197018    6037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5017,"bootTime":1725476436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:13.197108    6037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:13.201715    6037 out.go:177] * [old-k8s-version-455000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:13.207740    6037 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:13.207775    6037 notify.go:220] Checking for updates...
	I0904 13:24:13.214651    6037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:13.217624    6037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:13.220688    6037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:13.223695    6037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:13.225114    6037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:13.228954    6037 config.go:182] Loaded profile config "old-k8s-version-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0904 13:24:13.232675    6037 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0904 13:24:13.235693    6037 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:13.239664    6037 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:24:13.246703    6037 start.go:297] selected driver: qemu2
	I0904 13:24:13.246708    6037 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:13.246759    6037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:13.249009    6037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:24:13.249046    6037 cni.go:84] Creating CNI manager for ""
	I0904 13:24:13.249052    6037 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 13:24:13.249073    6037 start.go:340] cluster config:
	{Name:old-k8s-version-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:13.252708    6037 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:13.261717    6037 out.go:177] * Starting "old-k8s-version-455000" primary control-plane node in "old-k8s-version-455000" cluster
	I0904 13:24:13.265691    6037 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 13:24:13.265706    6037 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0904 13:24:13.265715    6037 cache.go:56] Caching tarball of preloaded images
	I0904 13:24:13.265788    6037 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:24:13.265795    6037 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0904 13:24:13.265862    6037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/old-k8s-version-455000/config.json ...
	I0904 13:24:13.266256    6037 start.go:360] acquireMachinesLock for old-k8s-version-455000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:13.266282    6037 start.go:364] duration metric: took 21.417µs to acquireMachinesLock for "old-k8s-version-455000"
	I0904 13:24:13.266292    6037 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:13.266296    6037 fix.go:54] fixHost starting: 
	I0904 13:24:13.266400    6037 fix.go:112] recreateIfNeeded on old-k8s-version-455000: state=Stopped err=<nil>
	W0904 13:24:13.266409    6037 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:13.274616    6037 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-455000" ...
	I0904 13:24:13.278657    6037 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:13.278687    6037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:e9:05:a1:0a:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:13.280498    6037 main.go:141] libmachine: STDOUT: 
	I0904 13:24:13.280518    6037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:13.280544    6037 fix.go:56] duration metric: took 14.247541ms for fixHost
	I0904 13:24:13.280547    6037 start.go:83] releasing machines lock for "old-k8s-version-455000", held for 14.2605ms
	W0904 13:24:13.280553    6037 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:13.280587    6037 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:13.280591    6037 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:18.282713    6037 start.go:360] acquireMachinesLock for old-k8s-version-455000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:18.283101    6037 start.go:364] duration metric: took 305.917µs to acquireMachinesLock for "old-k8s-version-455000"
	I0904 13:24:18.283294    6037 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:18.283310    6037 fix.go:54] fixHost starting: 
	I0904 13:24:18.283784    6037 fix.go:112] recreateIfNeeded on old-k8s-version-455000: state=Stopped err=<nil>
	W0904 13:24:18.283803    6037 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:18.288060    6037 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-455000" ...
	I0904 13:24:18.293182    6037 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:18.293363    6037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:e9:05:a1:0a:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/old-k8s-version-455000/disk.qcow2
	I0904 13:24:18.300913    6037 main.go:141] libmachine: STDOUT: 
	I0904 13:24:18.300977    6037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:18.301029    6037 fix.go:56] duration metric: took 17.719083ms for fixHost
	I0904 13:24:18.301042    6037 start.go:83] releasing machines lock for "old-k8s-version-455000", held for 17.831083ms
	W0904 13:24:18.301199    6037 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-455000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-455000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:18.309185    6037 out.go:201] 
	W0904 13:24:18.313325    6037 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:18.313369    6037 out.go:270] * 
	* 
	W0904 13:24:18.315069    6037 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:18.323127    6037 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-455000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (58.763541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
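
Editor's note: every start failure in this group reduces to the same root cause visible in the log above: nothing is listening on /var/run/socket_vmnet when libmachine invokes /opt/socket_vmnet/bin/socket_vmnet_client, so the qemu2 VM can never come up. A minimal Go sketch (not part of the test suite; the socket path is copied from the SocketVMnetPath field in the config dump above) that reproduces the refusal directly:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to qemu. With no
	// socket_vmnet daemon running, Dial fails with the same "connection
	// refused" that libmachine logs above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}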

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-455000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (31.500917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
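
Editor's note: the "client config: context ... does not exist" errors here and in AddonExistsAfterStop are kubeconfig failures, not cluster failures: because SecondStart never brought the VM up, no context named old-k8s-version-455000 was ever written. A small illustrative sketch (assuming k8s.io/client-go is available; not taken from the test code) that yields the same error:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Force the profile's context the way the test helpers do; loading
	// fails before any API call is attempted, because the context was
	// never created in the kubeconfig.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-455000"}
	if _, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig(); err != nil {
		fmt.Println("client config:", err) // e.g.: context "old-k8s-version-455000" does not exist
	}
}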

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-455000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-455000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-455000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.489166ms)

** stderr ** 
	error: context "old-k8s-version-455000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-455000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (29.590541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-455000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (30.742834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
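
Editor's note: the (-want +got) block above matches go-cmp's diff format: every expected v1.20.0 image is reported missing because "image list" ran against a VM that was never started. A minimal sketch, assuming github.com/google/go-cmp and a want list trimmed to two entries, of how a diff in that shape is produced:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// want holds the expected images (trimmed here for brevity); got is
	// empty, as when the image list command has no running VM to query.
	want := []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/pause:3.2"}
	got := []string{}
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}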

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-455000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-455000 --alsologtostderr -v=1: exit status 83 (44.803125ms)

-- stdout --
	* The control-plane node old-k8s-version-455000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-455000"

-- /stdout --
** stderr ** 
	I0904 13:24:18.588566    6056 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:18.589934    6056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:18.589937    6056 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:18.589940    6056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:18.590103    6056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:18.590313    6056 out.go:352] Setting JSON to false
	I0904 13:24:18.590322    6056 mustload.go:65] Loading cluster: old-k8s-version-455000
	I0904 13:24:18.590510    6056 config.go:182] Loaded profile config "old-k8s-version-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0904 13:24:18.594810    6056 out.go:177] * The control-plane node old-k8s-version-455000 host is not running: state=Stopped
	I0904 13:24:18.597869    6056 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-455000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-455000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (29.799334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (29.970208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-393000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-393000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.774826875s)

-- stdout --
	* [no-preload-393000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-393000" primary control-plane node in "no-preload-393000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-393000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:24:18.911918    6073 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:18.912043    6073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:18.912046    6073 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:18.912048    6073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:18.912159    6073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:18.913198    6073 out.go:352] Setting JSON to false
	I0904 13:24:18.929368    6073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5022,"bootTime":1725476436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:18.929444    6073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:18.934288    6073 out.go:177] * [no-preload-393000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:18.941290    6073 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:18.941363    6073 notify.go:220] Checking for updates...
	I0904 13:24:18.947285    6073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:18.950203    6073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:18.953222    6073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:18.956261    6073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:18.959193    6073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:18.962510    6073 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:18.962566    6073 config.go:182] Loaded profile config "stopped-upgrade-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0904 13:24:18.962612    6073 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:18.967297    6073 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:24:18.974247    6073 start.go:297] selected driver: qemu2
	I0904 13:24:18.974255    6073 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:24:18.974262    6073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:18.976306    6073 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:24:18.979292    6073 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:24:18.982253    6073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:24:18.982306    6073 cni.go:84] Creating CNI manager for ""
	I0904 13:24:18.982314    6073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:24:18.982318    6073 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:24:18.982361    6073 start.go:340] cluster config:
	{Name:no-preload-393000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-393000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:18.985786    6073 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.994249    6073 out.go:177] * Starting "no-preload-393000" primary control-plane node in "no-preload-393000" cluster
	I0904 13:24:18.998197    6073 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:24:18.998293    6073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/no-preload-393000/config.json ...
	I0904 13:24:18.998310    6073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/no-preload-393000/config.json: {Name:mk3a6a7c628d4a0a17fcdc6f287c6c617302704b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:24:18.998308    6073 cache.go:107] acquiring lock: {Name:mkd1fa8a10c4c3e5d814e251a967a29368832fc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998306    6073 cache.go:107] acquiring lock: {Name:mke7673cbb637f529eb0bd7b023262431ef9b0f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998330    6073 cache.go:107] acquiring lock: {Name:mka745e58423c8c9cb2e37040652b35cce8dbef8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998362    6073 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0904 13:24:18.998368    6073 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.5µs
	I0904 13:24:18.998379    6073 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0904 13:24:18.998385    6073 cache.go:107] acquiring lock: {Name:mk76479931206d1e13841d0a1c5c24b5788603fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998301    6073 cache.go:107] acquiring lock: {Name:mk9c7704e1048ecef824a70ac2368ac933c319f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998455    6073 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0904 13:24:18.998463    6073 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0904 13:24:18.998473    6073 cache.go:107] acquiring lock: {Name:mkc4ab36e5da2dbc352f8f45fd0737d887e07e59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998519    6073 cache.go:107] acquiring lock: {Name:mkdecab981a40d4b01abc5d5184818277c9ca578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998509    6073 cache.go:107] acquiring lock: {Name:mkaf095575f501a5dca70e27655087ec70ef9110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:18.998550    6073 start.go:360] acquireMachinesLock for no-preload-393000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:18.998586    6073 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "no-preload-393000"
	I0904 13:24:18.998604    6073 start.go:93] Provisioning new machine with config: &{Name:no-preload-393000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-393000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:18.998635    6073 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:18.998766    6073 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0904 13:24:18.998783    6073 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0904 13:24:18.998823    6073 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0904 13:24:18.999157    6073 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0904 13:24:19.003639    6073 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0904 13:24:19.007235    6073 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:19.011429    6073 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0904 13:24:19.011774    6073 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0904 13:24:19.012110    6073 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0904 13:24:19.013381    6073 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0904 13:24:19.013435    6073 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0904 13:24:19.013502    6073 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0904 13:24:19.013707    6073 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0904 13:24:19.023130    6073 start.go:159] libmachine.API.Create for "no-preload-393000" (driver="qemu2")
	I0904 13:24:19.023150    6073 client.go:168] LocalClient.Create starting
	I0904 13:24:19.023226    6073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:19.023256    6073 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:19.023263    6073 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:19.023298    6073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:19.023321    6073 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:19.023332    6073 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:19.023630    6073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:19.187834    6073 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:19.260249    6073 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:19.260270    6073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:19.260486    6073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:19.270496    6073 main.go:141] libmachine: STDOUT: 
	I0904 13:24:19.270532    6073 main.go:141] libmachine: STDERR: 
	I0904 13:24:19.270584    6073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2 +20000M
	I0904 13:24:19.279867    6073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:19.279887    6073 main.go:141] libmachine: STDERR: 
	I0904 13:24:19.279905    6073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:19.279910    6073 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:19.279925    6073 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:19.279950    6073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:cb:5d:c7:ab:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:19.282063    6073 main.go:141] libmachine: STDOUT: 
	I0904 13:24:19.282079    6073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:19.282099    6073 client.go:171] duration metric: took 258.949ms to LocalClient.Create
	I0904 13:24:19.417667    6073 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0904 13:24:19.423246    6073 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0904 13:24:19.464618    6073 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0904 13:24:19.476229    6073 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0904 13:24:19.504189    6073 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0904 13:24:19.524658    6073 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0904 13:24:19.561264    6073 cache.go:162] opening:  /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0904 13:24:19.604870    6073 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0904 13:24:19.604884    6073 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 606.563167ms
	I0904 13:24:19.604899    6073 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0904 13:24:21.282140    6073 start.go:128] duration metric: took 2.283534375s to createHost
	I0904 13:24:21.282152    6073 start.go:83] releasing machines lock for "no-preload-393000", held for 2.283598791s
	W0904 13:24:21.282163    6073 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:21.291579    6073 out.go:177] * Deleting "no-preload-393000" in qemu2 ...
	W0904 13:24:21.302124    6073 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:21.302132    6073 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:22.599397    6073 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0904 13:24:22.599420    6073 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.601057958s
	I0904 13:24:22.599433    6073 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0904 13:24:23.041796    6073 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0904 13:24:23.041816    6073 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.043434834s
	I0904 13:24:23.041827    6073 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0904 13:24:23.619749    6073 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0904 13:24:23.619791    6073 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.621566208s
	I0904 13:24:23.619810    6073 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0904 13:24:23.671586    6073 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0904 13:24:23.671623    6073 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.673407083s
	I0904 13:24:23.671649    6073 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0904 13:24:23.947340    6073 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0904 13:24:23.947373    6073 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.948935083s
	I0904 13:24:23.947388    6073 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0904 13:24:26.302229    6073 start.go:360] acquireMachinesLock for no-preload-393000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:26.302456    6073 start.go:364] duration metric: took 189.375µs to acquireMachinesLock for "no-preload-393000"
	I0904 13:24:26.302529    6073 start.go:93] Provisioning new machine with config: &{Name:no-preload-393000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-393000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:26.302643    6073 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:26.313103    6073 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:26.349618    6073 start.go:159] libmachine.API.Create for "no-preload-393000" (driver="qemu2")
	I0904 13:24:26.349671    6073 client.go:168] LocalClient.Create starting
	I0904 13:24:26.349779    6073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:26.349838    6073 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:26.349854    6073 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:26.349928    6073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:26.349979    6073 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:26.349994    6073 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:26.350476    6073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:26.517184    6073 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:26.602047    6073 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:26.602053    6073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:26.602324    6073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:26.611863    6073 main.go:141] libmachine: STDOUT: 
	I0904 13:24:26.611886    6073 main.go:141] libmachine: STDERR: 
	I0904 13:24:26.611949    6073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2 +20000M
	I0904 13:24:26.620206    6073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:26.620232    6073 main.go:141] libmachine: STDERR: 
	I0904 13:24:26.620246    6073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:26.620257    6073 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:26.620263    6073 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:26.620300    6073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:70:bb:62:fa:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:26.622127    6073 main.go:141] libmachine: STDOUT: 
	I0904 13:24:26.622149    6073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:26.622166    6073 client.go:171] duration metric: took 272.493459ms to LocalClient.Create
	I0904 13:24:28.167545    6073 cache.go:157] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0904 13:24:28.167601    6073 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.169367875s
	I0904 13:24:28.167617    6073 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0904 13:24:28.167655    6073 cache.go:87] Successfully saved all images to host disk.
	I0904 13:24:28.624280    6073 start.go:128] duration metric: took 2.321630042s to createHost
	I0904 13:24:28.624329    6073 start.go:83] releasing machines lock for "no-preload-393000", held for 2.321898875s
	W0904 13:24:28.624463    6073 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-393000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-393000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:28.633804    6073 out.go:201] 
	W0904 13:24:28.637705    6073 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:28.637731    6073 out.go:270] * 
	* 
	W0904 13:24:28.638406    6073 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:28.649677    6073 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-393000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (31.705875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.81s)
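
Note: every failure in this group shares the root cause visible in the stderr above. Before launching qemu-system-aarch64, the qemu2 driver dials the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and the connection is refused because nothing is listening on this host; the later status exit code 7 simply reflects the resulting "Stopped" host. A minimal Go sketch of the same reachability check (a hypothetical diagnostic, not part of the minikube code base; the socket path is the default shown in the log):

    // vmnetprobe dials the socket_vmnet unix socket the way the qemu2
    // driver does, to confirm whether the daemon accepts connections.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // default path from the log above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // A missing daemon yields the same "Connection refused" that
            // surfaces in the GUEST_PROVISION errors in this report.
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Running a probe like this before the suite would distinguish a broken CI environment from a driver regression.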

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-393000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-393000 create -f testdata/busybox.yaml: exit status 1 (26.763042ms)

** stderr ** 
	error: context "no-preload-393000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-393000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (29.539958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (29.577792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
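
Note: this subtest fails as a cascade rather than independently. FirstStart never created the cluster, so the kubeconfig contains no "no-preload-393000" context and kubectl exits 1 immediately. A hedged sketch of a pre-flight guard (hypothetical helper, not in helpers_test.go; assumes kubectl's `config get-contexts -o name` output of one context name per line):

    // contextExists reports whether a kubeconfig context is defined,
    // letting a harness skip kubectl steps after a failed cluster start.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if ctx == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := contextExists("no-preload-393000")
        fmt.Println(ok, err)
    }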

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-393000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-393000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-393000 describe deploy/metrics-server -n kube-system: exit status 1 (27.237375ms)

** stderr ** 
	error: context "no-preload-393000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-393000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (30.445791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
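
Note: `addons enable` itself succeeded here because it only rewrites the profile config and needs no running VM; the assertion then fails because the deployment description is empty. The expected image string in the failure message appears to be composed by prefixing the --registries override to the --images override, as this sketch illustrates (an inference from the failure text above, not from the test source):

    // expectedAddonImage mirrors the expectation in the failure above:
    // the custom registry is prefixed to the overridden image reference.
    package main

    import "fmt"

    func expectedAddonImage(registry, image string) string {
        return registry + "/" + image
    }

    func main() {
        // Prints "fake.domain/registry.k8s.io/echoserver:1.4", the string
        // the test expected to find in the metrics-server deployment.
        fmt.Println(expectedAddonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
    }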

TestStartStop/group/no-preload/serial/SecondStart (7.03s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-393000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-393000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.973525292s)

-- stdout --
	* [no-preload-393000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-393000" primary control-plane node in "no-preload-393000" cluster
	* Restarting existing qemu2 VM for "no-preload-393000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-393000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:24:32.346071    6155 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:32.346189    6155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:32.346192    6155 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:32.346195    6155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:32.346338    6155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:32.347397    6155 out.go:352] Setting JSON to false
	I0904 13:24:32.364060    6155 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5036,"bootTime":1725476436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:32.364138    6155 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:32.369115    6155 out.go:177] * [no-preload-393000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:32.377082    6155 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:32.377168    6155 notify.go:220] Checking for updates...
	I0904 13:24:32.383084    6155 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:32.385984    6155 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:32.389083    6155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:32.392127    6155 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:32.395065    6155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:32.398367    6155 config.go:182] Loaded profile config "no-preload-393000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:32.398622    6155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:32.403114    6155 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:24:32.409988    6155 start.go:297] selected driver: qemu2
	I0904 13:24:32.409996    6155 start.go:901] validating driver "qemu2" against &{Name:no-preload-393000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-393000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:32.410053    6155 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:32.412439    6155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:24:32.412466    6155 cni.go:84] Creating CNI manager for ""
	I0904 13:24:32.412474    6155 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:24:32.412507    6155 start.go:340] cluster config:
	{Name:no-preload-393000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-393000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:32.416043    6155 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.425043    6155 out.go:177] * Starting "no-preload-393000" primary control-plane node in "no-preload-393000" cluster
	I0904 13:24:32.429012    6155 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:24:32.429086    6155 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/no-preload-393000/config.json ...
	I0904 13:24:32.429117    6155 cache.go:107] acquiring lock: {Name:mkaf095575f501a5dca70e27655087ec70ef9110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429112    6155 cache.go:107] acquiring lock: {Name:mkd1fa8a10c4c3e5d814e251a967a29368832fc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429133    6155 cache.go:107] acquiring lock: {Name:mk9c7704e1048ecef824a70ac2368ac933c319f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429183    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0904 13:24:32.429185    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0904 13:24:32.429189    6155 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 74.375µs
	I0904 13:24:32.429189    6155 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.042µs
	I0904 13:24:32.429195    6155 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0904 13:24:32.429197    6155 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0904 13:24:32.429197    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0904 13:24:32.429202    6155 cache.go:107] acquiring lock: {Name:mke7673cbb637f529eb0bd7b023262431ef9b0f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429204    6155 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 84.917µs
	I0904 13:24:32.429205    6155 cache.go:107] acquiring lock: {Name:mkdecab981a40d4b01abc5d5184818277c9ca578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429216    6155 cache.go:107] acquiring lock: {Name:mkc4ab36e5da2dbc352f8f45fd0737d887e07e59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429243    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0904 13:24:32.429304    6155 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 90µs
	I0904 13:24:32.429313    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0904 13:24:32.429317    6155 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0904 13:24:32.429245    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0904 13:24:32.429314    6155 cache.go:107] acquiring lock: {Name:mk76479931206d1e13841d0a1c5c24b5788603fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429324    6155 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 123.125µs
	I0904 13:24:32.429340    6155 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0904 13:24:32.429248    6155 cache.go:107] acquiring lock: {Name:mka745e58423c8c9cb2e37040652b35cce8dbef8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:32.429210    6155 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0904 13:24:32.429319    6155 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 103.5µs
	I0904 13:24:32.429358    6155 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0904 13:24:32.429367    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0904 13:24:32.429371    6155 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 72.625µs
	I0904 13:24:32.429380    6155 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0904 13:24:32.429385    6155 cache.go:115] /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0904 13:24:32.429388    6155 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 141µs
	I0904 13:24:32.429392    6155 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0904 13:24:32.429396    6155 cache.go:87] Successfully saved all images to host disk.
	I0904 13:24:32.429522    6155 start.go:360] acquireMachinesLock for no-preload-393000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:32.429551    6155 start.go:364] duration metric: took 23.791µs to acquireMachinesLock for "no-preload-393000"
	I0904 13:24:32.429561    6155 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:32.429568    6155 fix.go:54] fixHost starting: 
	I0904 13:24:32.429682    6155 fix.go:112] recreateIfNeeded on no-preload-393000: state=Stopped err=<nil>
	W0904 13:24:32.429691    6155 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:32.438011    6155 out.go:177] * Restarting existing qemu2 VM for "no-preload-393000" ...
	I0904 13:24:32.440944    6155 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:32.440976    6155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:70:bb:62:fa:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:32.442936    6155 main.go:141] libmachine: STDOUT: 
	I0904 13:24:32.442956    6155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:32.442978    6155 fix.go:56] duration metric: took 13.412208ms for fixHost
	I0904 13:24:32.442982    6155 start.go:83] releasing machines lock for "no-preload-393000", held for 13.426542ms
	W0904 13:24:32.442988    6155 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:32.443010    6155 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:32.443014    6155 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:37.445155    6155 start.go:360] acquireMachinesLock for no-preload-393000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:39.214795    6155 start.go:364] duration metric: took 1.769508208s to acquireMachinesLock for "no-preload-393000"
	I0904 13:24:39.214867    6155 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:39.214884    6155 fix.go:54] fixHost starting: 
	I0904 13:24:39.215603    6155 fix.go:112] recreateIfNeeded on no-preload-393000: state=Stopped err=<nil>
	W0904 13:24:39.215634    6155 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:39.221304    6155 out.go:177] * Restarting existing qemu2 VM for "no-preload-393000" ...
	I0904 13:24:39.241130    6155 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:39.241377    6155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:70:bb:62:fa:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/no-preload-393000/disk.qcow2
	I0904 13:24:39.250761    6155 main.go:141] libmachine: STDOUT: 
	I0904 13:24:39.250829    6155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:39.250915    6155 fix.go:56] duration metric: took 36.031042ms for fixHost
	I0904 13:24:39.250935    6155 start.go:83] releasing machines lock for "no-preload-393000", held for 36.101958ms
	W0904 13:24:39.251123    6155 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-393000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-393000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:39.260104    6155 out.go:201] 
	W0904 13:24:39.264219    6155 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:39.264241    6155 out.go:270] * 
	* 
	W0904 13:24:39.265893    6155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:39.277175    6155 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-393000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (55.462458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.03s)
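
Note: SecondStart exercises the restart path instead of create: fixHost finds the machine Stopped, reruns the same socket_vmnet-backed qemu command, backs off five seconds after the first refusal ("Will try again in 5 seconds"), and surfaces GUEST_PROVISION after the second. A compact sketch of that one-retry policy (a hypothetical reconstruction; the real logic lives in start.go):

    // retryOnce mirrors the behavior in the log: one fixed 5s backoff
    // between two attempts, then the last error is returned to the caller.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func retryOnce(start func() error) error {
        if err := start(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second)
            return start()
        }
        return nil
    }

    func main() {
        err := retryOnce(func() error {
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        fmt.Println("final:", err)
    }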

TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-727000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-727000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.846920917s)

-- stdout --
	* [embed-certs-727000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-727000" primary control-plane node in "embed-certs-727000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-727000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:24:36.831456    6165 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:36.831578    6165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:36.831581    6165 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:36.831584    6165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:36.831702    6165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:36.832748    6165 out.go:352] Setting JSON to false
	I0904 13:24:36.849258    6165 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5040,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:36.849326    6165 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:36.853624    6165 out.go:177] * [embed-certs-727000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:36.859649    6165 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:36.859701    6165 notify.go:220] Checking for updates...
	I0904 13:24:36.866517    6165 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:36.869557    6165 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:36.872592    6165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:36.875593    6165 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:36.878558    6165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:36.881858    6165 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:36.881932    6165 config.go:182] Loaded profile config "no-preload-393000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:36.881976    6165 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:36.885467    6165 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:24:36.897622    6165 start.go:297] selected driver: qemu2
	I0904 13:24:36.897630    6165 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:24:36.897637    6165 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:36.900035    6165 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:24:36.902543    6165 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:24:36.905605    6165 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:24:36.905637    6165 cni.go:84] Creating CNI manager for ""
	I0904 13:24:36.905644    6165 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:24:36.905648    6165 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:24:36.905683    6165 start.go:340] cluster config:
	{Name:embed-certs-727000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:36.909524    6165 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:36.918572    6165 out.go:177] * Starting "embed-certs-727000" primary control-plane node in "embed-certs-727000" cluster
	I0904 13:24:36.922581    6165 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:24:36.922598    6165 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:24:36.922609    6165 cache.go:56] Caching tarball of preloaded images
	I0904 13:24:36.922688    6165 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:24:36.922701    6165 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:24:36.922770    6165 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/embed-certs-727000/config.json ...
	I0904 13:24:36.922789    6165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/embed-certs-727000/config.json: {Name:mke393176482a6228e281631f253c93164ebefe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:24:36.923252    6165 start.go:360] acquireMachinesLock for embed-certs-727000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:36.923288    6165 start.go:364] duration metric: took 29.916µs to acquireMachinesLock for "embed-certs-727000"
	I0904 13:24:36.923300    6165 start.go:93] Provisioning new machine with config: &{Name:embed-certs-727000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:36.923333    6165 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:36.931602    6165 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:36.949724    6165 start.go:159] libmachine.API.Create for "embed-certs-727000" (driver="qemu2")
	I0904 13:24:36.949756    6165 client.go:168] LocalClient.Create starting
	I0904 13:24:36.949824    6165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:36.949855    6165 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:36.949864    6165 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:36.949901    6165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:36.949925    6165 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:36.949934    6165 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:36.950296    6165 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:37.113469    6165 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:37.193573    6165 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:37.193579    6165 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:37.193767    6165 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:37.202783    6165 main.go:141] libmachine: STDOUT: 
	I0904 13:24:37.202803    6165 main.go:141] libmachine: STDERR: 
	I0904 13:24:37.202858    6165 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2 +20000M
	I0904 13:24:37.210698    6165 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:37.210713    6165 main.go:141] libmachine: STDERR: 
	I0904 13:24:37.210724    6165 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:37.210730    6165 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:37.210750    6165 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:37.210783    6165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:74:50:da:b2:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:37.212367    6165 main.go:141] libmachine: STDOUT: 
	I0904 13:24:37.212382    6165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:37.212403    6165 client.go:171] duration metric: took 262.646833ms to LocalClient.Create
	I0904 13:24:39.214544    6165 start.go:128] duration metric: took 2.291230417s to createHost
	I0904 13:24:39.214595    6165 start.go:83] releasing machines lock for "embed-certs-727000", held for 2.291336542s
	W0904 13:24:39.214653    6165 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:39.236219    6165 out.go:177] * Deleting "embed-certs-727000" in qemu2 ...
	W0904 13:24:39.292006    6165 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:39.292039    6165 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:44.294175    6165 start.go:360] acquireMachinesLock for embed-certs-727000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:44.294573    6165 start.go:364] duration metric: took 286.75µs to acquireMachinesLock for "embed-certs-727000"
	I0904 13:24:44.294705    6165 start.go:93] Provisioning new machine with config: &{Name:embed-certs-727000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:44.295090    6165 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:44.304768    6165 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:44.347551    6165 start.go:159] libmachine.API.Create for "embed-certs-727000" (driver="qemu2")
	I0904 13:24:44.347602    6165 client.go:168] LocalClient.Create starting
	I0904 13:24:44.347774    6165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:44.347848    6165 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:44.347870    6165 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:44.347934    6165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:44.347986    6165 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:44.348000    6165 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:44.352221    6165 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:44.531482    6165 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:44.583030    6165 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:44.583036    6165 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:44.583247    6165 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:44.592465    6165 main.go:141] libmachine: STDOUT: 
	I0904 13:24:44.592488    6165 main.go:141] libmachine: STDERR: 
	I0904 13:24:44.592540    6165 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2 +20000M
	I0904 13:24:44.600559    6165 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:44.600579    6165 main.go:141] libmachine: STDERR: 
	I0904 13:24:44.600601    6165 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:44.600605    6165 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:44.600621    6165 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:44.600654    6165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:86:06:72:be:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:44.602244    6165 main.go:141] libmachine: STDOUT: 
	I0904 13:24:44.602260    6165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:44.602273    6165 client.go:171] duration metric: took 254.669875ms to LocalClient.Create
	I0904 13:24:46.604412    6165 start.go:128] duration metric: took 2.309333625s to createHost
	I0904 13:24:46.604467    6165 start.go:83] releasing machines lock for "embed-certs-727000", held for 2.309909167s
	W0904 13:24:46.604904    6165 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:46.614535    6165 out.go:201] 
	W0904 13:24:46.622636    6165 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:46.622689    6165 out.go:270] * 
	* 
	W0904 13:24:46.625320    6165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:46.634521    6165 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-727000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (65.951542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.92s)
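Every start failure in this run bottoms out in the same line: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and LocalClient.Create aborts. A minimal triage sketch, assuming the /opt/socket_vmnet layout shown in the log; the --vmnet-gateway address is an illustrative assumption, not a value from this report:

	# is anything serving the unix socket the client dials?
	ls -l /var/run/socket_vmnet
	# start the daemon by hand (vmnet.framework requires root);
	# gateway address below is an assumed example
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# then retry the failed profile
	out/minikube-darwin-arm64 start -p embed-certs-727000 --driver=qemu2

If the daemon is healthy, the same qemu-system-aarch64 invocation logged above should proceed past netdev setup.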

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-393000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (31.236834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
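The "context does not exist" failures here are downstream of the failed FirstStart: the VM never booted, so minikube never wrote a kubeconfig entry for the profile, and every kubectl call fails before reaching a cluster. A quick way to confirm from the shell:

	# list the contexts minikube actually registered
	kubectl config get-contexts
	# with no matching entry, this fails exactly like the test's client config step
	kubectl --context no-preload-393000 get pods -A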

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-393000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-393000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-393000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.693208ms)

** stderr ** 
	error: context "no-preload-393000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-393000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (29.329791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-393000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (29.255ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
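The want/got diff above reports every expected v1.31.0 image as missing, which matches a VM that never started rather than a partially populated cache. The check can be reproduced by hand; the jq pipeline is an illustrative assumption about the JSON shape (the test itself compares the lists in Go):

	# dump what the profile reports, then pull out the image names
	out/minikube-darwin-arm64 -p no-preload-393000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort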

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-393000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-393000 --alsologtostderr -v=1: exit status 83 (48.345334ms)

-- stdout --
	* The control-plane node no-preload-393000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-393000"

-- /stdout --
** stderr ** 
	I0904 13:24:39.534916    6187 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:39.535107    6187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:39.535110    6187 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:39.535112    6187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:39.535235    6187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:39.535448    6187 out.go:352] Setting JSON to false
	I0904 13:24:39.535455    6187 mustload.go:65] Loading cluster: no-preload-393000
	I0904 13:24:39.535658    6187 config.go:182] Loaded profile config "no-preload-393000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:39.540416    6187 out.go:177] * The control-plane node no-preload-393000 host is not running: state=Stopped
	I0904 13:24:39.550513    6187 out.go:177]   To start a cluster, run: "minikube start -p no-preload-393000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-393000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (29.603ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (29.189917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-393000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
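pause exits with status 83 here because the control-plane host is Stopped, as the stdout banner says. The same --format={{.Host}} template the post-mortem helpers use can gate the call, as a sketch:

	# pause only when the host is actually up
	if out/minikube-darwin-arm64 status --format='{{.Host}}' -p no-preload-393000 | grep -q Running; then
	  out/minikube-darwin-arm64 pause -p no-preload-393000 --alsologtostderr -v=1
	fi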

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-227000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-227000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.00423725s)

-- stdout --
	* [default-k8s-diff-port-227000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-227000" primary control-plane node in "default-k8s-diff-port-227000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-227000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:24:39.971770    6211 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:39.971933    6211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:39.971937    6211 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:39.971940    6211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:39.972076    6211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:39.973235    6211 out.go:352] Setting JSON to false
	I0904 13:24:39.989476    6211 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5043,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:39.989539    6211 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:39.993413    6211 out.go:177] * [default-k8s-diff-port-227000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:40.003397    6211 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:40.003437    6211 notify.go:220] Checking for updates...
	I0904 13:24:40.011311    6211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:40.014358    6211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:40.017308    6211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:40.020385    6211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:40.023332    6211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:40.026638    6211 config.go:182] Loaded profile config "embed-certs-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:40.026694    6211 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:40.026737    6211 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:40.031271    6211 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:24:40.037336    6211 start.go:297] selected driver: qemu2
	I0904 13:24:40.037343    6211 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:24:40.037351    6211 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:40.039750    6211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 13:24:40.043332    6211 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:24:40.046360    6211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:24:40.046393    6211 cni.go:84] Creating CNI manager for ""
	I0904 13:24:40.046400    6211 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:24:40.046407    6211 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:24:40.046436    6211 start.go:340] cluster config:
	{Name:default-k8s-diff-port-227000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:40.050333    6211 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:40.059326    6211 out.go:177] * Starting "default-k8s-diff-port-227000" primary control-plane node in "default-k8s-diff-port-227000" cluster
	I0904 13:24:40.063284    6211 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:24:40.063302    6211 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:24:40.063313    6211 cache.go:56] Caching tarball of preloaded images
	I0904 13:24:40.063393    6211 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:24:40.063399    6211 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:24:40.063459    6211 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/default-k8s-diff-port-227000/config.json ...
	I0904 13:24:40.063479    6211 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/default-k8s-diff-port-227000/config.json: {Name:mk6c592c741527458100ae98b54e6b26a75e2f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:24:40.063711    6211 start.go:360] acquireMachinesLock for default-k8s-diff-port-227000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:40.063750    6211 start.go:364] duration metric: took 30.75µs to acquireMachinesLock for "default-k8s-diff-port-227000"
	I0904 13:24:40.063763    6211 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:40.063793    6211 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:40.071271    6211 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:40.089904    6211 start.go:159] libmachine.API.Create for "default-k8s-diff-port-227000" (driver="qemu2")
	I0904 13:24:40.089927    6211 client.go:168] LocalClient.Create starting
	I0904 13:24:40.089990    6211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:40.090021    6211 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:40.090031    6211 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:40.090075    6211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:40.090102    6211 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:40.090112    6211 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:40.090466    6211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:40.253136    6211 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:40.306969    6211 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:40.306974    6211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:40.307211    6211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:40.316366    6211 main.go:141] libmachine: STDOUT: 
	I0904 13:24:40.316385    6211 main.go:141] libmachine: STDERR: 
	I0904 13:24:40.316430    6211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2 +20000M
	I0904 13:24:40.324260    6211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:40.324276    6211 main.go:141] libmachine: STDERR: 
	I0904 13:24:40.324294    6211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:40.324300    6211 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:40.324315    6211 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:40.324344    6211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:7b:71:a0:55:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:40.325913    6211 main.go:141] libmachine: STDOUT: 
	I0904 13:24:40.325927    6211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:40.325945    6211 client.go:171] duration metric: took 236.016125ms to LocalClient.Create
	I0904 13:24:42.328083    6211 start.go:128] duration metric: took 2.264302666s to createHost
	I0904 13:24:42.328138    6211 start.go:83] releasing machines lock for "default-k8s-diff-port-227000", held for 2.2644165s
	W0904 13:24:42.328219    6211 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:42.335350    6211 out.go:177] * Deleting "default-k8s-diff-port-227000" in qemu2 ...
	W0904 13:24:42.374064    6211 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:42.374093    6211 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:47.374359    6211 start.go:360] acquireMachinesLock for default-k8s-diff-port-227000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:47.374835    6211 start.go:364] duration metric: took 384.75µs to acquireMachinesLock for "default-k8s-diff-port-227000"
	I0904 13:24:47.375029    6211 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:47.375360    6211 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:47.381116    6211 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:47.432025    6211 start.go:159] libmachine.API.Create for "default-k8s-diff-port-227000" (driver="qemu2")
	I0904 13:24:47.432075    6211 client.go:168] LocalClient.Create starting
	I0904 13:24:47.432196    6211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:47.432249    6211 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:47.432264    6211 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:47.432330    6211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:47.432360    6211 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:47.432371    6211 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:47.433164    6211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:47.624807    6211 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:47.883840    6211 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:47.883852    6211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:47.884292    6211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:47.893963    6211 main.go:141] libmachine: STDOUT: 
	I0904 13:24:47.893988    6211 main.go:141] libmachine: STDERR: 
	I0904 13:24:47.894058    6211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2 +20000M
	I0904 13:24:47.902099    6211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:47.902116    6211 main.go:141] libmachine: STDERR: 
	I0904 13:24:47.902131    6211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:47.902136    6211 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:47.902147    6211 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:47.902184    6211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:b8:a4:7d:41:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:47.903777    6211 main.go:141] libmachine: STDOUT: 
	I0904 13:24:47.903793    6211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:47.903806    6211 client.go:171] duration metric: took 471.733083ms to LocalClient.Create
	I0904 13:24:49.905962    6211 start.go:128] duration metric: took 2.530608625s to createHost
	I0904 13:24:49.906026    6211 start.go:83] releasing machines lock for "default-k8s-diff-port-227000", held for 2.531192167s
	W0904 13:24:49.906496    6211 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:49.916158    6211 out.go:201] 
	W0904 13:24:49.924296    6211 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:49.924330    6211 out.go:270] * 
	* 
	W0904 13:24:49.925753    6211 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:49.936070    6211 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-227000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (63.698042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.07s)
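Note the driver's built-in recovery above: it deletes the half-created machine, waits five seconds, and retries once, but both attempts die on the identical socket_vmnet connect. When the daemon cannot be run at all, a hedged workaround is the qemu2 driver's user-mode networking, which bypasses the socket entirely at the cost of a host-routable node IP (this assumes the driver's --network flag accepts "user"):

	# avoid socket_vmnet by falling back to QEMU user-mode networking
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-227000 --memory=2200 \
	  --apiserver-port=8444 --driver=qemu2 --network=user --kubernetes-version=v1.31.0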

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-727000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-727000 create -f testdata/busybox.yaml: exit status 1 (30.705584ms)

** stderr ** 
	error: context "embed-certs-727000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-727000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (28.426125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (29.688458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-727000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-727000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-727000 describe deploy/metrics-server -n kube-system: exit status 1 (26.936916ms)

** stderr ** 
	error: context "embed-certs-727000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-727000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (29.060583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
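The addons enable invocation itself returned zero; only the follow-up kubectl describe fails, again for lack of a context. Against a running cluster, the --images/--registries overrides could be verified directly, for example:

	# confirm metrics-server picked up the overridden image reference
	kubectl --context embed-certs-727000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'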

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-227000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-227000 create -f testdata/busybox.yaml: exit status 1 (33.157916ms)

** stderr ** 
	error: context "default-k8s-diff-port-227000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-227000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (31.09525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (29.581166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-727000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-727000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.219063958s)

-- stdout --
	* [embed-certs-727000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-727000" primary control-plane node in "embed-certs-727000" cluster
	* Restarting existing qemu2 VM for "embed-certs-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:24:50.138407    6277 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:50.138549    6277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:50.138553    6277 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:50.138555    6277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:50.138692    6277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:50.139643    6277 out.go:352] Setting JSON to false
	I0904 13:24:50.156213    6277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5054,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:50.156288    6277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:50.161058    6277 out.go:177] * [embed-certs-727000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:50.169129    6277 notify.go:220] Checking for updates...
	I0904 13:24:50.173062    6277 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:50.181069    6277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:50.189039    6277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:50.197016    6277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:50.205099    6277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:50.217079    6277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:50.221465    6277 config.go:182] Loaded profile config "embed-certs-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:50.221783    6277 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:50.226023    6277 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:24:50.233052    6277 start.go:297] selected driver: qemu2
	I0904 13:24:50.233057    6277 start.go:901] validating driver "qemu2" against &{Name:embed-certs-727000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:50.233117    6277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:50.235404    6277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:24:50.235430    6277 cni.go:84] Creating CNI manager for ""
	I0904 13:24:50.235437    6277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:24:50.235466    6277 start.go:340] cluster config:
	{Name:embed-certs-727000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:50.239023    6277 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:50.243066    6277 out.go:177] * Starting "embed-certs-727000" primary control-plane node in "embed-certs-727000" cluster
	I0904 13:24:50.251082    6277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:24:50.251114    6277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:24:50.251120    6277 cache.go:56] Caching tarball of preloaded images
	I0904 13:24:50.251194    6277 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:24:50.251200    6277 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:24:50.251252    6277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/embed-certs-727000/config.json ...
	I0904 13:24:50.251669    6277 start.go:360] acquireMachinesLock for embed-certs-727000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:50.251703    6277 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "embed-certs-727000"
	I0904 13:24:50.251713    6277 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:50.251717    6277 fix.go:54] fixHost starting: 
	I0904 13:24:50.251857    6277 fix.go:112] recreateIfNeeded on embed-certs-727000: state=Stopped err=<nil>
	W0904 13:24:50.251868    6277 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:50.255943    6277 out.go:177] * Restarting existing qemu2 VM for "embed-certs-727000" ...
	I0904 13:24:50.264107    6277 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:50.264155    6277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:86:06:72:be:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:50.266006    6277 main.go:141] libmachine: STDOUT: 
	I0904 13:24:50.266025    6277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:50.266053    6277 fix.go:56] duration metric: took 14.335833ms for fixHost
	I0904 13:24:50.266057    6277 start.go:83] releasing machines lock for "embed-certs-727000", held for 14.350792ms
	W0904 13:24:50.266064    6277 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:50.266107    6277 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:50.266112    6277 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:55.268216    6277 start.go:360] acquireMachinesLock for embed-certs-727000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:55.268736    6277 start.go:364] duration metric: took 367.542µs to acquireMachinesLock for "embed-certs-727000"
	I0904 13:24:55.268878    6277 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:55.268899    6277 fix.go:54] fixHost starting: 
	I0904 13:24:55.269619    6277 fix.go:112] recreateIfNeeded on embed-certs-727000: state=Stopped err=<nil>
	W0904 13:24:55.269644    6277 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:55.274081    6277 out.go:177] * Restarting existing qemu2 VM for "embed-certs-727000" ...
	I0904 13:24:55.280959    6277 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:55.281232    6277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:86:06:72:be:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/embed-certs-727000/disk.qcow2
	I0904 13:24:55.290395    6277 main.go:141] libmachine: STDOUT: 
	I0904 13:24:55.290466    6277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:55.290555    6277 fix.go:56] duration metric: took 21.652584ms for fixHost
	I0904 13:24:55.290573    6277 start.go:83] releasing machines lock for "embed-certs-727000", held for 21.813541ms
	W0904 13:24:55.290740    6277 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-727000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-727000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:55.299112    6277 out.go:201] 
	W0904 13:24:55.303095    6277 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:55.303120    6277 out.go:270] * 
	* 
	W0904 13:24:55.305770    6277 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:55.313094    6277 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-727000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (67.117375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.29s)
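
Every qemu2 start in this run dies the same way: the driver shells out through /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial the unix socket at /var/run/socket_vmnet, and that dial is refused because no socket_vmnet daemon is listening. A minimal Go sketch of that pre-flight check (not minikube code; the socket path is the default taken from the logs above):

	// socketcheck.go: probe the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // default path, per the logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here is exactly the failure captured above:
			// the daemon is not running or not listening on this socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When this probe fails, every SecondStart/FirstStart test below fails for the same underlying reason, regardless of profile.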

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-227000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-227000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-227000 describe deploy/metrics-server -n kube-system: exit status 1 (28.694458ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-227000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-227000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (33.107125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.16s)
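
The kubectl error `context "default-k8s-diff-port-227000" does not exist` is downstream of the same start failure: because the VM never provisioned, minikube never wrote a context for the profile into the kubeconfig, so every later kubectl --context call fails before reaching any API server. A small sketch (assuming k8s.io/client-go is on the module path) of the context lookup kubectl performs:

	// listcontexts.go: enumerate kubeconfig contexts, the data kubectl checks
	// before reporting `context "..." does not exist`.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors $KUBECONFIG
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		for name := range cfg.Contexts {
			fmt.Println(name) // a failed first start leaves no entry for the profile
		}
	}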

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-227000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-227000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.815647416s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-227000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-227000" primary control-plane node in "default-k8s-diff-port-227000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:24:52.529961    6305 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:52.530096    6305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:52.530104    6305 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:52.530107    6305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:52.530226    6305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:52.531220    6305 out.go:352] Setting JSON to false
	I0904 13:24:52.547209    6305 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5056,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:52.547273    6305 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:52.552276    6305 out.go:177] * [default-k8s-diff-port-227000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:52.560809    6305 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:52.560842    6305 notify.go:220] Checking for updates...
	I0904 13:24:52.568299    6305 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:52.571227    6305 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:52.574266    6305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:52.577330    6305 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:52.580266    6305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:52.583575    6305 config.go:182] Loaded profile config "default-k8s-diff-port-227000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:52.583831    6305 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:52.588264    6305 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:24:52.595269    6305 start.go:297] selected driver: qemu2
	I0904 13:24:52.595276    6305 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:52.595350    6305 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:52.597705    6305 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 13:24:52.597746    6305 cni.go:84] Creating CNI manager for ""
	I0904 13:24:52.597753    6305 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:24:52.597783    6305 start.go:340] cluster config:
	{Name:default-k8s-diff-port-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:52.601336    6305 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:52.610237    6305 out.go:177] * Starting "default-k8s-diff-port-227000" primary control-plane node in "default-k8s-diff-port-227000" cluster
	I0904 13:24:52.614227    6305 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:24:52.614244    6305 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:24:52.614253    6305 cache.go:56] Caching tarball of preloaded images
	I0904 13:24:52.614302    6305 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:24:52.614307    6305 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:24:52.614368    6305 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/default-k8s-diff-port-227000/config.json ...
	I0904 13:24:52.614889    6305 start.go:360] acquireMachinesLock for default-k8s-diff-port-227000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:52.614919    6305 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "default-k8s-diff-port-227000"
	I0904 13:24:52.614930    6305 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:52.614936    6305 fix.go:54] fixHost starting: 
	I0904 13:24:52.615066    6305 fix.go:112] recreateIfNeeded on default-k8s-diff-port-227000: state=Stopped err=<nil>
	W0904 13:24:52.615075    6305 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:52.619324    6305 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-227000" ...
	I0904 13:24:52.627150    6305 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:52.627193    6305 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:b8:a4:7d:41:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:52.629380    6305 main.go:141] libmachine: STDOUT: 
	I0904 13:24:52.629400    6305 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:52.629423    6305 fix.go:56] duration metric: took 14.488125ms for fixHost
	I0904 13:24:52.629426    6305 start.go:83] releasing machines lock for "default-k8s-diff-port-227000", held for 14.50225ms
	W0904 13:24:52.629434    6305 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:52.629467    6305 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:52.629472    6305 start.go:729] Will try again in 5 seconds ...
	I0904 13:24:57.631628    6305 start.go:360] acquireMachinesLock for default-k8s-diff-port-227000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:58.243147    6305 start.go:364] duration metric: took 611.381208ms to acquireMachinesLock for "default-k8s-diff-port-227000"
	I0904 13:24:58.243245    6305 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:24:58.243269    6305 fix.go:54] fixHost starting: 
	I0904 13:24:58.244029    6305 fix.go:112] recreateIfNeeded on default-k8s-diff-port-227000: state=Stopped err=<nil>
	W0904 13:24:58.244059    6305 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:24:58.253539    6305 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-227000" ...
	I0904 13:24:58.268609    6305 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:58.268859    6305 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:b8:a4:7d:41:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/default-k8s-diff-port-227000/disk.qcow2
	I0904 13:24:58.278911    6305 main.go:141] libmachine: STDOUT: 
	I0904 13:24:58.278995    6305 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:58.279083    6305 fix.go:56] duration metric: took 35.81925ms for fixHost
	I0904 13:24:58.279107    6305 start.go:83] releasing machines lock for "default-k8s-diff-port-227000", held for 35.905375ms
	W0904 13:24:58.279364    6305 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-227000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-227000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:58.284507    6305 out.go:201] 
	W0904 13:24:58.291594    6305 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:24:58.291621    6305 out.go:270] * 
	* 
	W0904 13:24:58.293844    6305 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:24:58.303511    6305 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-227000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (61.216042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.88s)
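
The stderr above shows minikube's recovery loop: start.go logs "StartHost failed, but will try again", sleeps five seconds, re-acquires the machines lock, and retries once before surfacing GUEST_PROVISION. A compact sketch of that fixed-delay retry shape (names here are illustrative, not minikube's actual API):

	// retry.go: fixed-delay retry mirroring the "Will try again in 5 seconds"
	// behavior visible in the logs; startWithRetry is a hypothetical helper.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startWithRetry(start func() error, attempts int, delay time.Duration) error {
		var err error
		for i := 1; i <= attempts; i++ {
			if err = start(); err == nil {
				return nil
			}
			if i < attempts {
				fmt.Printf("! StartHost failed, but will try again: %v\n", err)
				time.Sleep(delay)
			}
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		fmt.Println(startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}, 2, 5*time.Second))
	}

Because the refused socket is an environmental fault, the second attempt fails identically and the command exits with status 80.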

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-727000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (32.202333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-727000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-727000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-727000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.779167ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-727000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-727000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (29.989209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-727000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (29.551333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
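
The "(-want +got)" diff above is the output convention of github.com/google/go-cmp: every expected v1.31.0 image sits on the -want side and nothing appears on the +got side, because `image list` against a never-started VM returns an empty set. A reduced sketch of producing such a diff (the two image names are copied from the expected list above; the truncation is mine):

	// imagediff.go: reproduce a "(-want +got)" list diff with go-cmp.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.0",
			// ...remaining expected v1.31.0 images elided...
		}
		var got []string // empty: the host never ran, so no images are listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}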

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-727000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-727000 --alsologtostderr -v=1: exit status 83 (40.055916ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-727000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-727000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:24:55.581714    6324 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:55.581877    6324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:55.581880    6324 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:55.581883    6324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:55.582021    6324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:55.582234    6324 out.go:352] Setting JSON to false
	I0904 13:24:55.582241    6324 mustload.go:65] Loading cluster: embed-certs-727000
	I0904 13:24:55.582422    6324 config.go:182] Loaded profile config "embed-certs-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:55.585757    6324 out.go:177] * The control-plane node embed-certs-727000 host is not running: state=Stopped
	I0904 13:24:55.589529    6324 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-727000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-727000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (28.499542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (29.237542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
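
Unlike the exit-80 provisioning failures, `pause` exits 83: mustload.go loads the profile, sees state=Stopped, prints advice instead of attempting the operation, and returns a distinct advisory status. A sketch of that guard shape (the constant matches the observed exit status, but its symbolic meaning is minikube-internal; all names here are illustrative):

	// pauseguard.go: check host state before acting; exit with an advisory code.
	package main

	import (
		"fmt"
		"os"
	)

	const exitAdvisory = 83 // observed exit status; meaning is minikube-internal

	func main() {
		state := "Stopped" // the real flow reads this from the driver status query
		if state != "Running" {
			fmt.Printf("* The control-plane node host is not running: state=%s\n", state)
			fmt.Println(`  To start a cluster, run: "minikube start -p <profile>"`)
			os.Exit(exitAdvisory)
		}
		// ...pause the container runtime here...
	}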

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-509000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-509000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.867460333s)

                                                
                                                
-- stdout --
	* [newest-cni-509000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-509000" primary control-plane node in "newest-cni-509000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-509000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 13:24:55.891649    6342 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:55.891778    6342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:55.891782    6342 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:55.891784    6342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:55.891898    6342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:55.892939    6342 out.go:352] Setting JSON to false
	I0904 13:24:55.908734    6342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5059,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:24:55.908807    6342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:24:55.913673    6342 out.go:177] * [newest-cni-509000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:24:55.920681    6342 notify.go:220] Checking for updates...
	I0904 13:24:55.930579    6342 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:24:55.938525    6342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:24:55.946677    6342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:24:55.949651    6342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:24:55.952628    6342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:24:55.956638    6342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:24:55.959862    6342 config.go:182] Loaded profile config "default-k8s-diff-port-227000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:55.959944    6342 config.go:182] Loaded profile config "multinode-452000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:55.960001    6342 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:24:55.964621    6342 out.go:177] * Using the qemu2 driver based on user configuration
	I0904 13:24:55.970589    6342 start.go:297] selected driver: qemu2
	I0904 13:24:55.970595    6342 start.go:901] validating driver "qemu2" against <nil>
	I0904 13:24:55.970600    6342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:24:55.973147    6342 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0904 13:24:55.973174    6342 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0904 13:24:55.977674    6342 out.go:177] * Automatically selected the socket_vmnet network
	I0904 13:24:55.984649    6342 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0904 13:24:55.984667    6342 cni.go:84] Creating CNI manager for ""
	I0904 13:24:55.984675    6342 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:24:55.984679    6342 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 13:24:55.984703    6342 start.go:340] cluster config:
	{Name:newest-cni-509000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:24:55.988587    6342 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:24:55.996573    6342 out.go:177] * Starting "newest-cni-509000" primary control-plane node in "newest-cni-509000" cluster
	I0904 13:24:56.001717    6342 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:24:56.001744    6342 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:24:56.001755    6342 cache.go:56] Caching tarball of preloaded images
	I0904 13:24:56.001847    6342 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:24:56.001854    6342 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:24:56.001930    6342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/newest-cni-509000/config.json ...
	I0904 13:24:56.001944    6342 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/newest-cni-509000/config.json: {Name:mk46167a99f837b70cc4b0bbc4c71ac8e292312f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 13:24:56.002176    6342 start.go:360] acquireMachinesLock for newest-cni-509000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:24:56.002215    6342 start.go:364] duration metric: took 31.708µs to acquireMachinesLock for "newest-cni-509000"
	I0904 13:24:56.002228    6342 start.go:93] Provisioning new machine with config: &{Name:newest-cni-509000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:24:56.002267    6342 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:24:56.011656    6342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:24:56.031919    6342 start.go:159] libmachine.API.Create for "newest-cni-509000" (driver="qemu2")
	I0904 13:24:56.031964    6342 client.go:168] LocalClient.Create starting
	I0904 13:24:56.032049    6342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:24:56.032084    6342 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:56.032093    6342 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:56.032142    6342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:24:56.032168    6342 main.go:141] libmachine: Decoding PEM data...
	I0904 13:24:56.032178    6342 main.go:141] libmachine: Parsing certificate...
	I0904 13:24:56.032569    6342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:24:56.188615    6342 main.go:141] libmachine: Creating SSH key...
	I0904 13:24:56.221676    6342 main.go:141] libmachine: Creating Disk image...
	I0904 13:24:56.221681    6342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:24:56.221916    6342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:24:56.231035    6342 main.go:141] libmachine: STDOUT: 
	I0904 13:24:56.231054    6342 main.go:141] libmachine: STDERR: 
	I0904 13:24:56.231111    6342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2 +20000M
	I0904 13:24:56.239024    6342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:24:56.239040    6342 main.go:141] libmachine: STDERR: 
	I0904 13:24:56.239056    6342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:24:56.239062    6342 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:24:56.239077    6342 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:24:56.239113    6342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:96:92:be:e3:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:24:56.240663    6342 main.go:141] libmachine: STDOUT: 
	I0904 13:24:56.240680    6342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:24:56.240703    6342 client.go:171] duration metric: took 208.735959ms to LocalClient.Create
	I0904 13:24:58.242939    6342 start.go:128] duration metric: took 2.240638458s to createHost
	I0904 13:24:58.242993    6342 start.go:83] releasing machines lock for "newest-cni-509000", held for 2.240805708s
	W0904 13:24:58.243048    6342 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:58.265548    6342 out.go:177] * Deleting "newest-cni-509000" in qemu2 ...
	W0904 13:24:58.321362    6342 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:24:58.321387    6342 start.go:729] Will try again in 5 seconds ...
	I0904 13:25:03.323506    6342 start.go:360] acquireMachinesLock for newest-cni-509000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:25:03.324104    6342 start.go:364] duration metric: took 499.709µs to acquireMachinesLock for "newest-cni-509000"
	I0904 13:25:03.324295    6342 start.go:93] Provisioning new machine with config: &{Name:newest-cni-509000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 13:25:03.324534    6342 start.go:125] createHost starting for "" (driver="qemu2")
	I0904 13:25:03.330105    6342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0904 13:25:03.379531    6342 start.go:159] libmachine.API.Create for "newest-cni-509000" (driver="qemu2")
	I0904 13:25:03.379581    6342 client.go:168] LocalClient.Create starting
	I0904 13:25:03.379716    6342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/ca.pem
	I0904 13:25:03.379784    6342 main.go:141] libmachine: Decoding PEM data...
	I0904 13:25:03.379803    6342 main.go:141] libmachine: Parsing certificate...
	I0904 13:25:03.379870    6342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19575-1140/.minikube/certs/cert.pem
	I0904 13:25:03.379917    6342 main.go:141] libmachine: Decoding PEM data...
	I0904 13:25:03.379930    6342 main.go:141] libmachine: Parsing certificate...
	I0904 13:25:03.380621    6342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0904 13:25:03.550774    6342 main.go:141] libmachine: Creating SSH key...
	I0904 13:25:03.664748    6342 main.go:141] libmachine: Creating Disk image...
	I0904 13:25:03.664753    6342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0904 13:25:03.664972    6342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2.raw /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:25:03.674550    6342 main.go:141] libmachine: STDOUT: 
	I0904 13:25:03.674565    6342 main.go:141] libmachine: STDERR: 
	I0904 13:25:03.674628    6342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2 +20000M
	I0904 13:25:03.682492    6342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0904 13:25:03.682518    6342 main.go:141] libmachine: STDERR: 
	I0904 13:25:03.682530    6342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:25:03.682535    6342 main.go:141] libmachine: Starting QEMU VM...
	I0904 13:25:03.682545    6342 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:25:03.682575    6342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d5:c2:e0:ee:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:25:03.684273    6342 main.go:141] libmachine: STDOUT: 
	I0904 13:25:03.684296    6342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:25:03.684308    6342 client.go:171] duration metric: took 304.725ms to LocalClient.Create
	I0904 13:25:05.686521    6342 start.go:128] duration metric: took 2.361995417s to createHost
	I0904 13:25:05.686579    6342 start.go:83] releasing machines lock for "newest-cni-509000", held for 2.362491s
	W0904 13:25:05.686917    6342 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-509000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-509000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:25:05.700559    6342 out.go:201] 
	W0904 13:25:05.703829    6342 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:25:05.703887    6342 out.go:270] * 
	* 
	W0904 13:25:05.706546    6342 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:25:05.719577    6342 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-509000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000: exit status 7 (70.38325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
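
Every newest-cni failure in this group reduces to the same root cause visible in the stderr above: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client could not hand QEMU a network file descriptor and the VM never came up. A minimal triage sketch for the CI host follows; the daemon path matches the SocketVMnetPath in the config above, but the start command and gateway address are assumptions about a typical socket_vmnet setup, not taken from this run:

	# Is the daemon up and its Unix socket present?
	ls -l /var/run/socket_vmnet
	pgrep -lf socket_vmnet
	# vmnet.framework needs root, so the daemon is normally started as root,
	# e.g. (illustrative gateway address, not from this run):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &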

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-227000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (30.607333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
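
The "does not exist" error here is a kubectl-level symptom of the failed start: minikube only writes a context for a profile into the kubeconfig once the cluster actually starts, and this one never did. The same error reproduces outside the test harness; a sketch, assuming the kubeconfig path shown elsewhere in this report:

	KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig kubectl config get-contexts
	# default-k8s-diff-port-227000 is absent from the list, so any --context call fails:
	kubectl --context default-k8s-diff-port-227000 get pods
	# error: context "default-k8s-diff-port-227000" does not exist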

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-227000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-227000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-227000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.445ms)

** stderr ** 
	error: context "default-k8s-diff-port-227000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-227000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (29.9445ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-227000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (28.738ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
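
The image check itself is simple: the test runs "image list --format=json" and diffs the output against the expected v1.31.0 image set. With the host stopped there is nothing to list, so every expected image lands on the -want side of the diff. A manual reproduction (sketch; the healthy-cluster output shape is a general expectation, not captured in this run):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-227000 image list --format=json
	# On a running cluster this prints a JSON list that would include, among others,
	# registry.k8s.io/kube-apiserver:v1.31.0; here the host is Stopped, so all
	# expected images are reported missing.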

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-227000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-227000 --alsologtostderr -v=1: exit status 83 (43.690333ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-227000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-227000"

-- /stdout --
** stderr ** 
	I0904 13:24:58.566575    6365 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:24:58.566710    6365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:58.566717    6365 out.go:358] Setting ErrFile to fd 2...
	I0904 13:24:58.566720    6365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:24:58.566838    6365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:24:58.567052    6365 out.go:352] Setting JSON to false
	I0904 13:24:58.567060    6365 mustload.go:65] Loading cluster: default-k8s-diff-port-227000
	I0904 13:24:58.567260    6365 config.go:182] Loaded profile config "default-k8s-diff-port-227000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:24:58.571844    6365 out.go:177] * The control-plane node default-k8s-diff-port-227000 host is not running: state=Stopped
	I0904 13:24:58.579978    6365 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-227000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-227000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (28.713209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (29.458916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
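
Two exit codes matter when reading this block: "pause" refuses to act on a stopped host and exits 83 with the advisory text shown, while the post-mortem "status --format={{.Host}}" exits 7, which the helpers explicitly tolerate ("may be ok") because a stopped host is a legal state. The distinction is observable directly:

	out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000
	# prints "Stopped" and exits 7; helpers_test.go treats exit 7 as non-fatal
	# and skips log retrieval rather than failing the post-mortem outright.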

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-509000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-509000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.1813525s)

-- stdout --
	* [newest-cni-509000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-509000" primary control-plane node in "newest-cni-509000" cluster
	* Restarting existing qemu2 VM for "newest-cni-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-509000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0904 13:25:09.398681    6412 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:25:09.398801    6412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:25:09.398804    6412 out.go:358] Setting ErrFile to fd 2...
	I0904 13:25:09.398806    6412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:25:09.398907    6412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:25:09.399995    6412 out.go:352] Setting JSON to false
	I0904 13:25:09.416166    6412 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5073,"bootTime":1725476436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 13:25:09.416250    6412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 13:25:09.419805    6412 out.go:177] * [newest-cni-509000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 13:25:09.426914    6412 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 13:25:09.426955    6412 notify.go:220] Checking for updates...
	I0904 13:25:09.433874    6412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 13:25:09.436821    6412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 13:25:09.439728    6412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 13:25:09.442841    6412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 13:25:09.445828    6412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 13:25:09.447600    6412 config.go:182] Loaded profile config "newest-cni-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:25:09.447851    6412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 13:25:09.451905    6412 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 13:25:09.458663    6412 start.go:297] selected driver: qemu2
	I0904 13:25:09.458672    6412 start.go:901] validating driver "qemu2" against &{Name:newest-cni-509000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:25:09.458735    6412 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 13:25:09.460868    6412 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0904 13:25:09.460893    6412 cni.go:84] Creating CNI manager for ""
	I0904 13:25:09.460899    6412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 13:25:09.460920    6412 start.go:340] cluster config:
	{Name:newest-cni-509000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 13:25:09.464224    6412 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 13:25:09.472910    6412 out.go:177] * Starting "newest-cni-509000" primary control-plane node in "newest-cni-509000" cluster
	I0904 13:25:09.476792    6412 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 13:25:09.476805    6412 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 13:25:09.476811    6412 cache.go:56] Caching tarball of preloaded images
	I0904 13:25:09.476861    6412 preload.go:172] Found /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0904 13:25:09.476866    6412 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 13:25:09.476913    6412 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/newest-cni-509000/config.json ...
	I0904 13:25:09.477424    6412 start.go:360] acquireMachinesLock for newest-cni-509000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:25:09.477451    6412 start.go:364] duration metric: took 21.25µs to acquireMachinesLock for "newest-cni-509000"
	I0904 13:25:09.477460    6412 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:25:09.477464    6412 fix.go:54] fixHost starting: 
	I0904 13:25:09.477587    6412 fix.go:112] recreateIfNeeded on newest-cni-509000: state=Stopped err=<nil>
	W0904 13:25:09.477596    6412 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:25:09.485776    6412 out.go:177] * Restarting existing qemu2 VM for "newest-cni-509000" ...
	I0904 13:25:09.489834    6412 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:25:09.489891    6412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d5:c2:e0:ee:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:25:09.491828    6412 main.go:141] libmachine: STDOUT: 
	I0904 13:25:09.491848    6412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:25:09.491872    6412 fix.go:56] duration metric: took 14.408583ms for fixHost
	I0904 13:25:09.491876    6412 start.go:83] releasing machines lock for "newest-cni-509000", held for 14.421209ms
	W0904 13:25:09.491883    6412 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:25:09.491919    6412 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:25:09.491923    6412 start.go:729] Will try again in 5 seconds ...
	I0904 13:25:14.493995    6412 start.go:360] acquireMachinesLock for newest-cni-509000: {Name:mkffea5767ac0f54c70242857234a501d8b5b2f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 13:25:14.494431    6412 start.go:364] duration metric: took 329.834µs to acquireMachinesLock for "newest-cni-509000"
	I0904 13:25:14.494613    6412 start.go:96] Skipping create...Using existing machine configuration
	I0904 13:25:14.494638    6412 fix.go:54] fixHost starting: 
	I0904 13:25:14.495379    6412 fix.go:112] recreateIfNeeded on newest-cni-509000: state=Stopped err=<nil>
	W0904 13:25:14.495405    6412 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 13:25:14.503834    6412 out.go:177] * Restarting existing qemu2 VM for "newest-cni-509000" ...
	I0904 13:25:14.508812    6412 qemu.go:418] Using hvf for hardware acceleration
	I0904 13:25:14.509072    6412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d5:c2:e0:ee:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19575-1140/.minikube/machines/newest-cni-509000/disk.qcow2
	I0904 13:25:14.517783    6412 main.go:141] libmachine: STDOUT: 
	I0904 13:25:14.517847    6412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0904 13:25:14.517938    6412 fix.go:56] duration metric: took 23.305958ms for fixHost
	I0904 13:25:14.517955    6412 start.go:83] releasing machines lock for "newest-cni-509000", held for 23.474333ms
	W0904 13:25:14.518128    6412 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-509000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-509000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0904 13:25:14.524686    6412 out.go:201] 
	W0904 13:25:14.528794    6412 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0904 13:25:14.528814    6412 out.go:270] * 
	* 
	W0904 13:25:14.530734    6412 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 13:25:14.538779    6412 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-509000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000: exit status 7 (67.220209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
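
Note that the suggested remedy in the output ("minikube delete -p newest-cni-509000") only clears the stale profile; as long as /var/run/socket_vmnet stays unreachable, any subsequent start fails identically. A recovery sketch, assuming the socket_vmnet daemon has been restored first (flags copied from the failing invocation above):

	out/minikube-darwin-arm64 delete -p newest-cni-509000
	out/minikube-darwin-arm64 start -p newest-cni-509000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.0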

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-509000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000: exit status 7 (29.94025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-509000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-509000 --alsologtostderr -v=1: exit status 83 (42.765209ms)

-- stdout --
	* The control-plane node newest-cni-509000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-509000"

-- /stdout --
** stderr ** 
	I0904 13:25:14.723577    6426 out.go:345] Setting OutFile to fd 1 ...
	I0904 13:25:14.723740    6426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:25:14.723744    6426 out.go:358] Setting ErrFile to fd 2...
	I0904 13:25:14.723746    6426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 13:25:14.723875    6426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 13:25:14.724091    6426 out.go:352] Setting JSON to false
	I0904 13:25:14.724098    6426 mustload.go:65] Loading cluster: newest-cni-509000
	I0904 13:25:14.724287    6426 config.go:182] Loaded profile config "newest-cni-509000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 13:25:14.728898    6426 out.go:177] * The control-plane node newest-cni-509000 host is not running: state=Stopped
	I0904 13:25:14.732824    6426 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-509000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-509000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000: exit status 7 (30.667416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-509000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000: exit status 7 (30.284416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-509000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (154/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 8.12
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.74
29 TestAddons/serial/Volcano 38.38
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 18.5
35 TestAddons/parallel/InspektorGadget 10.38
36 TestAddons/parallel/MetricsServer 5.3
39 TestAddons/parallel/CSI 56.47
40 TestAddons/parallel/Headlamp 16.62
41 TestAddons/parallel/CloudSpanner 5.21
42 TestAddons/parallel/LocalPath 53.95
43 TestAddons/parallel/NvidiaDevicePlugin 6.19
44 TestAddons/parallel/Yakd 10.28
45 TestAddons/StoppedEnableDisable 12.41
53 TestHyperKitDriverInstallOrUpdate 9.94
56 TestErrorSpam/setup 34.91
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.69
60 TestErrorSpam/unpause 0.62
61 TestErrorSpam/stop 64.31
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 77.09
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.4
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
73 TestFunctional/serial/CacheCmd/cache/add_local 1.17
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.64
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.82
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 38.37
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.63
85 TestFunctional/serial/InvalidService 4.16
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 9.55
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.25
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 25.94
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.44
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.4
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.31
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.12
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 6.22
133 TestFunctional/parallel/MountCmd/specific-port 0.91
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.01
135 TestFunctional/parallel/Version/short 0.03
136 TestFunctional/parallel/Version/components 0.15
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.14
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.75
142 TestFunctional/parallel/ImageCommands/Setup 1.74
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.49
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.3
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
150 TestFunctional/parallel/DockerEnv/bash 0.4
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 181.52
161 TestMultiControlPlane/serial/DeployApp 4.45
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 55.07
164 TestMultiControlPlane/serial/NodeLabels 0.16
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.11
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.87
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 1.89
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 1.17
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.37
277 TestNoKubernetes/serial/Stop 1.85
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
294 TestStartStop/group/old-k8s-version/serial/Stop 3.31
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 3.31
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
318 TestStartStop/group/embed-certs/serial/Stop 3.08
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.11
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.1
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 3.38
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-210000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-210000: exit status 85 (94.789ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-210000 | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT |          |
	|         | -p download-only-210000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 12:24:43
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 12:24:43.049256    1663 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:24:43.049408    1663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:24:43.049411    1663 out.go:358] Setting ErrFile to fd 2...
	I0904 12:24:43.049414    1663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:24:43.049537    1663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	W0904 12:24:43.049620    1663 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19575-1140/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19575-1140/.minikube/config/config.json: no such file or directory
	I0904 12:24:43.050925    1663 out.go:352] Setting JSON to true
	I0904 12:24:43.068724    1663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1447,"bootTime":1725476436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 12:24:43.068801    1663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 12:24:43.077918    1663 out.go:97] [download-only-210000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 12:24:43.078071    1663 notify.go:220] Checking for updates...
	W0904 12:24:43.078110    1663 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 12:24:43.079153    1663 out.go:169] MINIKUBE_LOCATION=19575
	I0904 12:24:43.081788    1663 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:24:43.087890    1663 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 12:24:43.090872    1663 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 12:24:43.093842    1663 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	W0904 12:24:43.099815    1663 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 12:24:43.100062    1663 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 12:24:43.104897    1663 out.go:97] Using the qemu2 driver based on user configuration
	I0904 12:24:43.104918    1663 start.go:297] selected driver: qemu2
	I0904 12:24:43.104934    1663 start.go:901] validating driver "qemu2" against <nil>
	I0904 12:24:43.105000    1663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 12:24:43.107789    1663 out.go:169] Automatically selected the socket_vmnet network
	I0904 12:24:43.113495    1663 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0904 12:24:43.113589    1663 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 12:24:43.113668    1663 cni.go:84] Creating CNI manager for ""
	I0904 12:24:43.113685    1663 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 12:24:43.113739    1663 start.go:340] cluster config:
	{Name:download-only-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-210000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:24:43.118959    1663 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 12:24:43.123821    1663 out.go:97] Downloading VM boot image ...
	I0904 12:24:43.123836    1663 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso
	I0904 12:24:49.582977    1663 out.go:97] Starting "download-only-210000" primary control-plane node in "download-only-210000" cluster
	I0904 12:24:49.583007    1663 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 12:24:49.644887    1663 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0904 12:24:49.644896    1663 cache.go:56] Caching tarball of preloaded images
	I0904 12:24:49.645056    1663 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 12:24:49.650171    1663 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0904 12:24:49.650178    1663 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0904 12:24:49.738576    1663 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0904 12:24:56.446254    1663 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0904 12:24:56.446421    1663 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0904 12:24:57.142867    1663 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0904 12:24:57.143062    1663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/download-only-210000/config.json ...
	I0904 12:24:57.143092    1663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/download-only-210000/config.json: {Name:mk4ad6959e28f3b32d62b1914cf69975ab372a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 12:24:57.143319    1663 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 12:24:57.143499    1663 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0904 12:24:57.658995    1663 out.go:193] 
	W0904 12:24:57.665940    1663 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19575-1140/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960 0x108cc7960] Decompressors:map[bz2:0x14000637d00 gz:0x14000637d08 tar:0x14000637c90 tar.bz2:0x14000637ca0 tar.gz:0x14000637cb0 tar.xz:0x14000637cc0 tar.zst:0x14000637cd0 tbz2:0x14000637ca0 tgz:0x14000637cb0 txz:0x14000637cc0 tzst:0x14000637cd0 xz:0x14000637d10 zip:0x14000637d20 zst:0x14000637d18] Getters:map[file:0x1400142e550 http:0x140004e62d0 https:0x140004e6320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0904 12:24:57.665968    1663 out_reason.go:110] 
	W0904 12:24:57.676901    1663 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 12:24:57.680826    1663 out.go:193] 
	
	
	* The control-plane node download-only-210000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-210000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
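
Note: the non-zero "minikube logs" exit above (status 85) is what the test expects for a download-only profile whose host was never created, and the captured log also records why the earlier v1.20.0 download steps failed: the kubectl checksum fetch returned 404. A quick manual probe of the URL from the log reproduces this (a sketch; -L follows the dl.k8s.io redirect), most likely because darwin/arm64 kubectl builds were not published as far back as v1.20.0:

    # Show every HTTP status in the redirect chain for the checksum file named in the log:
    curl -sIL "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256" | grep HTTP
    # The final status matches the "bad response code: 404" recorded above.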

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-210000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (8.12s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-744000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-744000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (8.115446958s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (8.12s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-744000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-744000: exit status 85 (71.694042ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-210000 | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT |                     |
	|         | -p download-only-210000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT | 04 Sep 24 12:24 PDT |
	| delete  | -p download-only-210000        | download-only-210000 | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT | 04 Sep 24 12:24 PDT |
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.34.0 | 04 Sep 24 12:24 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 12:24:58
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 12:24:58.096399    1690 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:24:58.096516    1690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:24:58.096519    1690 out.go:358] Setting ErrFile to fd 2...
	I0904 12:24:58.096522    1690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:24:58.096638    1690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:24:58.097730    1690 out.go:352] Setting JSON to true
	I0904 12:24:58.113888    1690 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1462,"bootTime":1725476436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 12:24:58.113950    1690 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 12:24:58.119024    1690 out.go:97] [download-only-744000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 12:24:58.119132    1690 notify.go:220] Checking for updates...
	I0904 12:24:58.121940    1690 out.go:169] MINIKUBE_LOCATION=19575
	I0904 12:24:58.125008    1690 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:24:58.129004    1690 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 12:24:58.131956    1690 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 12:24:58.135007    1690 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	W0904 12:24:58.139434    1690 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 12:24:58.139613    1690 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 12:24:58.142917    1690 out.go:97] Using the qemu2 driver based on user configuration
	I0904 12:24:58.142929    1690 start.go:297] selected driver: qemu2
	I0904 12:24:58.142933    1690 start.go:901] validating driver "qemu2" against <nil>
	I0904 12:24:58.143000    1690 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 12:24:58.145921    1690 out.go:169] Automatically selected the socket_vmnet network
	I0904 12:24:58.151244    1690 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0904 12:24:58.151338    1690 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 12:24:58.151370    1690 cni.go:84] Creating CNI manager for ""
	I0904 12:24:58.151379    1690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 12:24:58.151387    1690 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 12:24:58.151422    1690 start.go:340] cluster config:
	{Name:download-only-744000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-744000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:24:58.154928    1690 iso.go:125] acquiring lock: {Name:mkebc4172c19bd1bff0f54edbc3322d94476263f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 12:24:58.157975    1690 out.go:97] Starting "download-only-744000" primary control-plane node in "download-only-744000" cluster
	I0904 12:24:58.157986    1690 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 12:24:58.225382    1690 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0904 12:24:58.225411    1690 cache.go:56] Caching tarball of preloaded images
	I0904 12:24:58.225590    1690 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 12:24:58.230832    1690 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0904 12:24:58.230840    1690 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0904 12:24:58.325041    1690 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-744000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-744000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
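
Note: unlike the v1.20.0 run, this profile cached its preload cleanly. The download URL above carries its own digest (checksum=md5:90c22abece392b762c0b4e45be981bb4), so the cached tarball can be re-verified by hand -- a minimal sketch using the cache path and digest from the log (md5 is the macOS digest tool; use md5sum on Linux):

    cd /Users/jenkins/minikube-integration/19575-1140/.minikube/cache/preloaded-tarball
    # Digest should print 90c22abece392b762c0b4e45be981bb4, per the checksum= parameter above:
    md5 preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4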

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-744000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.33s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-359000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-359000
--- PASS: TestBinaryMirror (0.33s)
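
Note: TestBinaryMirror starts minikube against a local stand-in for dl.k8s.io via --binary-mirror. A minimal sketch of the same idea, assuming a static file server whose tree mimics the release layout (v<version>/bin/<os>/<arch>/kubectl and so on); the directory and profile name are illustrative, not taken from the test harness:

    # Serve a pre-seeded mirror tree on the port the test used, then point minikube at it:
    python3 -m http.server 49312 --directory ./mirror &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:49312 --driver=qemu2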

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-970000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-970000: exit status 85 (60.115666ms)

-- stdout --
	* Profile "addons-970000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-970000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-970000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-970000: exit status 85 (64.089959ms)

-- stdout --
	* Profile "addons-970000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-970000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (200.74s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-970000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-970000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m20.744562s)
--- PASS: TestAddons/Setup (200.74s)

TestAddons/serial/Volcano (38.38s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 7.777458ms
addons_test.go:913: volcano-controller stabilized in 7.808583ms
addons_test.go:897: volcano-scheduler stabilized in 7.8485ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-zdq4d" [9bcad7aa-3628-434f-960c-b9031d062cb1] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005697083s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-r8nwp" [d6e24185-ff72-4f9e-9791-803614aa1366] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0024255s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-zgt59" [bfdc16ea-f66c-483a-afdb-6595d396b958] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005574375s
addons_test.go:932: (dbg) Run:  kubectl --context addons-970000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-970000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-970000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [33bb8a6b-b344-4a71-bcdc-afe827fc2c9a] Pending
helpers_test.go:344: "test-job-nginx-0" [33bb8a6b-b344-4a71-bcdc-afe827fc2c9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [33bb8a6b-b344-4a71-bcdc-afe827fc2c9a] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.009882208s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-970000 addons disable volcano --alsologtostderr -v=1: (10.123462292s)
--- PASS: TestAddons/serial/Volcano (38.38s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-970000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-970000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (18.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-970000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-970000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-970000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ff340fc6-c4bd-4ccd-b670-eaec1844d3f3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ff340fc6-c4bd-4ccd-b670-eaec1844d3f3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003733542s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-970000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-970000 addons disable ingress --alsologtostderr -v=1: (7.28277775s)
--- PASS: TestAddons/parallel/Ingress (18.50s)
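
Note: both externally visible checks in this test can be replayed by hand; the commands below are copied from the log above (192.168.105.2 is the VM address that "minikube ip" reported on this runner):

    # Hit the ingress from inside the VM with the Host header from the nginx-ingress manifest:
    out/minikube-darwin-arm64 -p addons-970000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Resolve the ingress-dns test record directly against the VM:
    nslookup hello-john.test 192.168.105.2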

TestAddons/parallel/InspektorGadget (10.38s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ghz42" [d1f1f896-446b-4ecb-b3be-9fce7b2c33e1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008456291s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-970000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-970000: (5.371765583s)
--- PASS: TestAddons/parallel/InspektorGadget (10.38s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.299458ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-b5mqw" [e8ef34f0-6061-446a-99f1-31ce7bcb791c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010412709s
addons_test.go:417: (dbg) Run:  kubectl --context addons-970000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.30s)

TestAddons/parallel/CSI (56.47s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.7855ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-970000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-970000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d965314e-eede-4638-80ed-f48c477a19c4] Pending
helpers_test.go:344: "task-pv-pod" [d965314e-eede-4638-80ed-f48c477a19c4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d965314e-eede-4638-80ed-f48c477a19c4] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006742709s
addons_test.go:590: (dbg) Run:  kubectl --context addons-970000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-970000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-970000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-970000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-970000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-970000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-970000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d7c628af-d80c-4353-8ec6-39a62950e23f] Pending
helpers_test.go:344: "task-pv-pod-restore" [d7c628af-d80c-4353-8ec6-39a62950e23f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d7c628af-d80c-4353-8ec6-39a62950e23f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010119291s
addons_test.go:632: (dbg) Run:  kubectl --context addons-970000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-970000 delete pod task-pv-pod-restore: (1.148776166s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-970000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-970000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-970000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.13869825s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.47s)
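
Note: the long runs of jsonpath polls above come from the test helper; interactively the same waits can be written with kubectl wait -- a sketch, assuming kubectl >= 1.23 (when --for=jsonpath support landed):

    # Block until the PVC binds, mirroring the hpvc poll loop:
    kubectl --context addons-970000 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m
    # Block until the snapshot is usable, mirroring the new-snapshot-demo polls:
    kubectl --context addons-970000 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m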

TestAddons/parallel/Headlamp (16.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-970000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-8rt9w" [f74b6339-49df-4124-b126-605939edafec] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-8rt9w" [f74b6339-49df-4124-b126-605939edafec] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.008710583s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-970000 addons disable headlamp --alsologtostderr -v=1: (5.259672333s)
--- PASS: TestAddons/parallel/Headlamp (16.62s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-n5nxb" [4900e523-63b6-4831-bdcc-3d935c15dfc1] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010217416s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-970000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (53.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-970000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-970000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [25474c27-7183-4e75-9f83-36839a2c85cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [25474c27-7183-4e75-9f83-36839a2c85cf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [25474c27-7183-4e75-9f83-36839a2c85cf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004085s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-970000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 ssh "cat /opt/local-path-provisioner/pvc-f6372382-4b6a-4318-8b3d-4cb347c50492_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-970000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-970000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-970000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.485081917s)
--- PASS: TestAddons/parallel/LocalPath (53.95s)

TestAddons/parallel/NvidiaDevicePlugin (6.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4rl79" [e1603688-ffe4-4f9b-bdad-e827397c39d5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008817916s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-970000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

TestAddons/parallel/Yakd (10.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-tghdh" [49a6d5c8-23ab-42f7-acb5-d043f43af5c6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005268875s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-970000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-970000 addons disable yakd --alsologtostderr -v=1: (5.270903167s)
--- PASS: TestAddons/parallel/Yakd (10.28s)

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-970000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-970000: (12.212066459s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-970000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-970000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-970000
--- PASS: TestAddons/StoppedEnableDisable (12.41s)
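
Note: the enable/disable calls succeeding against a stopped profile is expected -- addon toggles edit the profile's stored config rather than a live cluster. To see the recorded addon states afterwards (a sketch; list formatting varies across minikube versions):

    out/minikube-darwin-arm64 addons list -p addons-970000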

TestHyperKitDriverInstallOrUpdate (9.94s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.94s)

TestErrorSpam/setup (34.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-774000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-774000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 --driver=qemu2 : (34.912925167s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (34.91s)
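
Note: the "acceptable stderr" above is minikube's version-skew warning -- the host kubectl (1.29.2) sits two minor versions behind the 1.31.0 cluster, outside the supported +/-1 skew. To confirm the client binary that triggered it (path taken from the warning itself):

    /usr/local/bin/kubectl version --client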

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (64.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 stop: (12.209948208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 stop: (26.064931666s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-774000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-774000 stop: (26.035183084s)
--- PASS: TestErrorSpam/stop (64.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19575-1140/.minikube/files/etc/test/nested/copy/1661/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-143000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-143000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m17.092369041s)
--- PASS: TestFunctional/serial/StartWithProxy (77.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.4s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-143000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-143000 --alsologtostderr -v=8: (37.399657542s)
functional_test.go:663: soft start took 37.4000355s for "functional-143000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.40s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-143000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-143000 cache add registry.k8s.io/pause:3.1: (1.045664666s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1373213328/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cache add minikube-local-cache-test:functional-143000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cache delete minikube-local-cache-test:functional-143000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-143000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.466417ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)
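The cache flow this test exercises can be replayed by hand. A minimal sketch, assuming an existing profile named functional-143000 and a minikube binary on PATH standing in for the out/minikube-darwin-arm64 build under test:

	# Remove a cached image from inside the node.
	minikube -p functional-143000 ssh sudo docker rmi registry.k8s.io/pause:latest
	# Confirm it is gone; crictl inspecti exits non-zero, as in the log above.
	minikube -p functional-143000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# Push every image in the host-side cache back into the node.
	minikube -p functional-143000 cache reload
	# The image is present again, so this now exits 0.
	minikube -p functional-143000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
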
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.82s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 kubectl -- --context functional-143000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.82s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-143000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-143000 get pods: (1.006580666s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

TestFunctional/serial/ExtraConfig (38.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-143000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0904 12:43:27.791610    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:27.799811    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:27.813199    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:27.836587    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:27.880003    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:27.963460    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:28.126822    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:28.450220    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:29.093981    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:30.377499    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:32.940787    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:38.062744    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:43:48.310667    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-143000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.366280875s)
functional_test.go:761: restart took 38.366360542s for "functional-143000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.37s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-143000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.63s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1779139234/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.63s)

TestFunctional/serial/InvalidService (4.16s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-143000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-143000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-143000: exit status 115 (151.237542ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31225 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-143000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
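The check above is easy to replay. A minimal sketch, assuming the testdata/invalidsvc.yaml manifest from the minikube repository (a Service with no running pod behind it) and a minikube binary standing in for the build under test:

	kubectl --context functional-143000 apply -f testdata/invalidsvc.yaml
	# minikube prints the URL table but refuses to open the service,
	# exiting 115 (SVC_UNREACHABLE) as captured in the log above.
	minikube service invalid-svc -p functional-143000
	kubectl --context functional-143000 delete -f testdata/invalidsvc.yaml
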
TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 config get cpus: exit status 14 (29.546708ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 config get cpus: exit status 14 (29.691792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
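The exit codes above encode the config lookup semantics: get on an unset key fails with exit status 14, while set/get/unset on a present key succeed. A minimal sketch, assuming a minikube binary standing in for the build under test:

	minikube -p functional-143000 config unset cpus
	minikube -p functional-143000 config get cpus    # exit status 14: key not in config
	minikube -p functional-143000 config set cpus 2
	minikube -p functional-143000 config get cpus    # prints 2, exits 0
	minikube -p functional-143000 config unset cpus  # back to the unset state
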
TestFunctional/parallel/DashboardCmd (9.55s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-143000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-143000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2711: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.55s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-143000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-143000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.756042ms)

-- stdout --
	* [functional-143000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0904 12:44:37.728623    2691 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:44:37.728752    2691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:44:37.728763    2691 out.go:358] Setting ErrFile to fd 2...
	I0904 12:44:37.728766    2691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:44:37.728927    2691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:44:37.729973    2691 out.go:352] Setting JSON to false
	I0904 12:44:37.746304    2691 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2641,"bootTime":1725476436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 12:44:37.746377    2691 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 12:44:37.750691    2691 out.go:177] * [functional-143000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0904 12:44:37.758591    2691 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 12:44:37.758632    2691 notify.go:220] Checking for updates...
	I0904 12:44:37.765729    2691 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:44:37.767176    2691 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 12:44:37.770701    2691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 12:44:37.773684    2691 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 12:44:37.779714    2691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 12:44:37.783009    2691 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:44:37.783259    2691 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 12:44:37.787710    2691 out.go:177] * Using the qemu2 driver based on existing profile
	I0904 12:44:37.794685    2691 start.go:297] selected driver: qemu2
	I0904 12:44:37.794694    2691 start.go:901] validating driver "qemu2" against &{Name:functional-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:44:37.794744    2691 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 12:44:37.800730    2691 out.go:201] 
	W0904 12:44:37.804630    2691 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 12:44:37.807686    2691 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-143000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
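--dry-run validates flags against the existing profile without touching the VM, which is why the 250MB request fails fast while the flag-free run succeeds. A minimal sketch, assuming a minikube binary standing in for the build under test:

	# Below the usable minimum of 1800MB: exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
	# without starting anything, as in the log above.
	minikube start -p functional-143000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2
	# Same profile, no conflicting flags: the dry run exits 0.
	minikube start -p functional-143000 --dry-run --alsologtostderr -v=1 --driver=qemu2
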
TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-143000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-143000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.718166ms)

-- stdout --
	* [functional-143000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0904 12:44:37.604468    2687 out.go:345] Setting OutFile to fd 1 ...
	I0904 12:44:37.604563    2687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:44:37.604566    2687 out.go:358] Setting ErrFile to fd 2...
	I0904 12:44:37.604568    2687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 12:44:37.604696    2687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
	I0904 12:44:37.606159    2687 out.go:352] Setting JSON to false
	I0904 12:44:37.624212    2687 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2641,"bootTime":1725476436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0904 12:44:37.624298    2687 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0904 12:44:37.629779    2687 out.go:177] * [functional-143000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0904 12:44:37.637731    2687 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 12:44:37.637768    2687 notify.go:220] Checking for updates...
	I0904 12:44:37.645680    2687 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	I0904 12:44:37.648704    2687 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0904 12:44:37.650111    2687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 12:44:37.653662    2687 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	I0904 12:44:37.656711    2687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 12:44:37.660022    2687 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 12:44:37.660277    2687 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 12:44:37.664644    2687 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0904 12:44:37.671633    2687 start.go:297] selected driver: qemu2
	I0904 12:44:37.671640    2687 start.go:901] validating driver "qemu2" against &{Name:functional-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 12:44:37.671691    2687 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 12:44:37.678649    2687 out.go:201] 
	W0904 12:44:37.686385    2687 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 12:44:37.689677    2687 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.94s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bdebf24d-d0d7-4bbf-b31b-7c1a8caac405] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010432667s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-143000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-143000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-143000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-143000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6e9d0fa3-1739-4069-8bef-93e3b5c43caf] Pending
helpers_test.go:344: "sp-pod" [6e9d0fa3-1739-4069-8bef-93e3b5c43caf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6e9d0fa3-1739-4069-8bef-93e3b5c43caf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.009453458s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-143000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-143000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-143000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4e1b6634-acf2-431c-8d98-54550f25aed1] Pending
helpers_test.go:344: "sp-pod" [4e1b6634-acf2-431c-8d98-54550f25aed1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4e1b6634-acf2-431c-8d98-54550f25aed1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.010869625s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-143000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.94s)
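The test demonstrates that data written to a PVC-backed mount survives pod recreation. A minimal sketch of the same flow, assuming the testdata/storage-provisioner manifests from the minikube repository:

	kubectl --context functional-143000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-143000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-143000 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod; the claim, and the file, should survive.
	kubectl --context functional-143000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-143000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-143000 exec sp-pod -- ls /tmp/mount
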
TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh -n functional-143000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cp functional-143000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4256439572/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh -n functional-143000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh -n functional-143000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1661/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo cat /etc/test/nested/copy/1661/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1661.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo cat /etc/ssl/certs/1661.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1661.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo cat /usr/share/ca-certificates/1661.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo cat /etc/ssl/certs/16612.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo cat /usr/share/ca-certificates/16612.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)
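The paths checked above come in pairs plus an OpenSSL hash-named alias (for example /etc/ssl/certs/51391683.0). A minimal sketch of verifying the sync by hand, assuming a certificate was placed under the host's .minikube/certs directory before start and a minikube binary standing in for the build under test:

	minikube -p functional-143000 ssh "sudo cat /etc/ssl/certs/1661.pem"
	minikube -p functional-143000 ssh "sudo cat /usr/share/ca-certificates/1661.pem"
	# The hash-named symlink OpenSSL uses for lookup:
	minikube -p functional-143000 ssh "sudo cat /etc/ssl/certs/51391683.0"
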
TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-143000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh "sudo systemctl is-active crio": exit status 1 (63.631125ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
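The non-zero exit here is the expected outcome: with the docker runtime active, crio must be inactive, and systemctl is-active exits 3 for an inactive unit, which ssh propagates to the caller. A minimal sketch, assuming a minikube binary standing in for the build under test:

	# Prints "inactive" and exits non-zero when the cluster runs on docker.
	minikube -p functional-143000 ssh "sudo systemctl is-active crio"
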
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-143000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-143000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-143000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2548: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-143000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-143000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-143000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [123cc109-cb8b-4bd8-8286-e7a5003492f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [123cc109-cb8b-4bd8-8286-e7a5003492f5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004452208s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-143000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.193.119 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-143000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
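Taken together, the TunnelCmd group exercises the full tunnel lifecycle. A minimal sketch of the same flow, assuming the testdata/testsvc.yaml manifest from the minikube repository and a minikube binary standing in for the build under test:

	# Keep a tunnel running in the background so LoadBalancer services get an ingress IP.
	minikube -p functional-143000 tunnel --alsologtostderr &
	kubectl --context functional-143000 apply -f testdata/testsvc.yaml
	# Read the ingress IP once assigned; the service is then reachable from the host.
	kubectl --context functional-143000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	# Cluster DNS also resolves through the tunnel (10.96.0.10 is the in-cluster DNS service).
	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
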
TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-143000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-143000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-x4fnh" [18033e89-e3c0-4ea2-9ec5-0b766b4306ff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-x4fnh" [18033e89-e3c0-4ea2-9ec5-0b766b4306ff] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.008770084s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 service list -o json
functional_test.go:1494: Took "288.8965ms" to run "out/minikube-darwin-arm64 -p functional-143000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32316
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32316
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
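
The ServiceCmd subtests above exercise service discovery from the host in several output modes. A condensed sketch with the image, names, and flags taken from the log (the NodePort URL differs per run):

    kubectl --context functional-143000 create deployment hello-node \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-143000 expose deployment hello-node \
      --type=NodePort --port=8080
    out/minikube-darwin-arm64 -p functional-143000 service list -o json
    out/minikube-darwin-arm64 -p functional-143000 service --namespace=default --https --url hello-node
    out/minikube-darwin-arm64 -p functional-143000 service hello-node --url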

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "85.558667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "35.70475ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "83.967333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "32.806375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
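
The three ProfileCmd subtests differ only in output shape. From the timings above, the --light variant returns in roughly a third of the time; per minikube's flag help it skips validating each cluster's status, which matches that gap:

    out/minikube-darwin-arm64 profile list                  # table; ~86ms in this run
    out/minikube-darwin-arm64 profile list -o json          # machine-readable; ~84ms
    out/minikube-darwin-arm64 profile list -o json --light  # ~33ms; skips cluster status checks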

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port139397152/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725479069190839000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port139397152/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725479069190839000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port139397152/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725479069190839000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port139397152/001/test-1725479069190839000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.794375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (87.863667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 19:44 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 19:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 19:44 test-1725479069190839000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh cat /mount-9p/test-1725479069190839000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-143000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b93d6ab4-d69e-431c-ab21-e3a1b54273a9] Pending
helpers_test.go:344: "busybox-mount" [b93d6ab4-d69e-431c-ab21-e3a1b54273a9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b93d6ab4-d69e-431c-ab21-e3a1b54273a9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b93d6ab4-d69e-431c-ab21-e3a1b54273a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004639958s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-143000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port139397152/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.22s)
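
MountCmd/any-port drives a full 9p round trip; the two non-zero findmnt exits at the top are just the test polling until the mount appears. The same check by hand (the host path here is illustrative, the test uses a per-run temp dir):

    # serve a host directory into the guest at /mount-9p over 9p
    out/minikube-darwin-arm64 mount -p functional-143000 /tmp/demo:/mount-9p --alsologtostderr -v=1 &

    # wait for the mount, then inspect it from the guest
    out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-143000 ssh -- ls -la /mount-9p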

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2654515076/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.643333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2654515076/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh "sudo umount -f /mount-9p": exit status 1 (63.593333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-143000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2654515076/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.91s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T" /mount1: exit status 1 (81.992458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-143000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-143000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup651616618/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.01s)
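
specific-port and VerifyCleanup cover the remaining mount options: pinning the 9p server to a fixed port, and killing every mount process for a profile at once (the "not mounted" umount failure in specific-port is tolerated by the test, since the mount had already been stopped). A sketch with the same flags, host path again illustrative:

    # pin the 9p server to a fixed port instead of an ephemeral one
    out/minikube-darwin-arm64 mount -p functional-143000 /tmp/demo:/mount-9p --port 46464 &

    # tear down all mounts for the profile in one shot, as VerifyCleanup does
    out/minikube-darwin-arm64 mount -p functional-143000 --kill=true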

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-143000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-143000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-143000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-143000 image ls --format short --alsologtostderr:
I0904 12:44:44.920516    2793 out.go:345] Setting OutFile to fd 1 ...
I0904 12:44:44.920693    2793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:44.920697    2793 out.go:358] Setting ErrFile to fd 2...
I0904 12:44:44.920699    2793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:44.920828    2793 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
I0904 12:44:44.921275    2793 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:44.921340    2793 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:44.922117    2793 ssh_runner.go:195] Run: systemctl --version
I0904 12:44:44.922124    2793 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/functional-143000/id_rsa Username:docker}
I0904 12:44:44.947546    2793 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-143000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-143000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-143000 | d0bd712de9e6b | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-143000 image ls --format table --alsologtostderr:
I0904 12:44:45.137563    2805 out.go:345] Setting OutFile to fd 1 ...
I0904 12:44:45.137739    2805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:45.137742    2805 out.go:358] Setting ErrFile to fd 2...
I0904 12:44:45.137744    2805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:45.137901    2805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
I0904 12:44:45.138320    2805 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:45.138382    2805 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:45.139260    2805 ssh_runner.go:195] Run: systemctl --version
I0904 12:44:45.139268    2805 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/functional-143000/id_rsa Username:docker}
I0904 12:44:45.168386    2805 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-143000 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","rep
oDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-143000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d0bd712de9e6bcca44cbee60b493fbba02f23c8694d3b0d724a7d74d8c72be03","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-143000"],"size":"30"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubern
etesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-143000 image ls --format json --alsologtostderr:
I0904 12:44:44.994352    2800 out.go:345] Setting OutFile to fd 1 ...
I0904 12:44:44.994485    2800 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:44.994488    2800 out.go:358] Setting ErrFile to fd 2...
I0904 12:44:44.994491    2800 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:44.994609    2800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
I0904 12:44:44.995040    2800 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:44.995105    2800 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:44.995957    2800 ssh_runner.go:195] Run: systemctl --version
I0904 12:44:44.995969    2800 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/functional-143000/id_rsa Username:docker}
I0904 12:44:45.090999    2800 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-143000 image ls --format yaml --alsologtostderr:
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-143000
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d0bd712de9e6bcca44cbee60b493fbba02f23c8694d3b0d724a7d74d8c72be03
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-143000
size: "30"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-143000 image ls --format yaml --alsologtostderr:
I0904 12:44:45.221565    2807 out.go:345] Setting OutFile to fd 1 ...
I0904 12:44:45.221716    2807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:45.221720    2807 out.go:358] Setting ErrFile to fd 2...
I0904 12:44:45.221722    2807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:45.221864    2807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
I0904 12:44:45.222296    2807 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:45.222361    2807 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:45.223294    2807 ssh_runner.go:195] Run: systemctl --version
I0904 12:44:45.223304    2807 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/functional-143000/id_rsa Username:docker}
I0904 12:44:45.251743    2807 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
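
The four ImageList subtests render the same inventory in different shapes; the --alsologtostderr traces show each one SSHing into the guest and reading docker images --no-trunc --format "{{json .}}". Equivalent invocations:

    out/minikube-darwin-arm64 -p functional-143000 image ls --format short
    out/minikube-darwin-arm64 -p functional-143000 image ls --format table
    out/minikube-darwin-arm64 -p functional-143000 image ls --format json
    out/minikube-darwin-arm64 -p functional-143000 image ls --format yaml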

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-143000 ssh pgrep buildkitd: exit status 1 (64.529333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image build -t localhost/my-image:functional-143000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-143000 image build -t localhost/my-image:functional-143000 testdata/build --alsologtostderr: (1.606533708s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-143000 image build -t localhost/my-image:functional-143000 testdata/build --alsologtostderr:
I0904 12:44:45.360211    2811 out.go:345] Setting OutFile to fd 1 ...
I0904 12:44:45.360418    2811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:45.360422    2811 out.go:358] Setting ErrFile to fd 2...
I0904 12:44:45.360424    2811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 12:44:45.360546    2811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19575-1140/.minikube/bin
I0904 12:44:45.360995    2811 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:45.361695    2811 config.go:182] Loaded profile config "functional-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 12:44:45.362453    2811 ssh_runner.go:195] Run: systemctl --version
I0904 12:44:45.362461    2811 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19575-1140/.minikube/machines/functional-143000/id_rsa Username:docker}
I0904 12:44:45.387910    2811 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1962048561.tar
I0904 12:44:45.387968    2811 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 12:44:45.391621    2811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1962048561.tar
I0904 12:44:45.393172    2811 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1962048561.tar: stat -c "%s %y" /var/lib/minikube/build/build.1962048561.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1962048561.tar': No such file or directory
I0904 12:44:45.393187    2811 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1962048561.tar --> /var/lib/minikube/build/build.1962048561.tar (3072 bytes)
I0904 12:44:45.403453    2811 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1962048561
I0904 12:44:45.407769    2811 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1962048561 -xf /var/lib/minikube/build/build.1962048561.tar
I0904 12:44:45.411424    2811 docker.go:360] Building image: /var/lib/minikube/build/build.1962048561
I0904 12:44:45.411478    2811 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-143000 /var/lib/minikube/build/build.1962048561
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:574163e03278adc611d693ec1c40a057f06be9c47a6b76f9be7a8631a1e0ab15 done
#8 naming to localhost/my-image:functional-143000 done
#8 DONE 0.1s
I0904 12:44:46.925276    2811 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-143000 /var/lib/minikube/build/build.1962048561: (1.513789375s)
I0904 12:44:46.925353    2811 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1962048561
I0904 12:44:46.929020    2811 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1962048561.tar
I0904 12:44:46.932169    2811 build_images.go:217] Built localhost/my-image:functional-143000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1962048561.tar
I0904 12:44:46.932184    2811 build_images.go:133] succeeded building to: functional-143000
I0904 12:44:46.932187    2811 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.75s)
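
ImageBuild's trace spells out the mechanism: the local context is tarred, copied to /var/lib/minikube/build in the guest, untarred, and built with the guest's docker build. The pgrep buildkitd failure at the top appears to mean only that no standalone BuildKit daemon is running, so the build goes through Docker. By hand:

    # build testdata/build inside the cluster and tag the result
    out/minikube-darwin-arm64 -p functional-143000 image build \
      -t localhost/my-image:functional-143000 testdata/build --alsologtostderr

    # confirm the new tag is visible in the guest
    out/minikube-darwin-arm64 -p functional-143000 image ls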

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.728792541s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-143000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image load --daemon kicbase/echo-server:functional-143000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image load --daemon kicbase/echo-server:functional-143000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-143000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image load --daemon kicbase/echo-server:functional-143000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image save kicbase/echo-server:functional-143000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image rm kicbase/echo-server:functional-143000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-143000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 image save --daemon kicbase/echo-server:functional-143000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-143000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
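
Taken together, the ImageLoad/ImageSave/ImageRemove subtests make a full round trip between the host Docker daemon, a tarball, and the cluster. A condensed sketch; the tar path is illustrative (this run used a Jenkins workspace path):

    out/minikube-darwin-arm64 -p functional-143000 image load --daemon kicbase/echo-server:functional-143000
    out/minikube-darwin-arm64 -p functional-143000 image save kicbase/echo-server:functional-143000 /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-143000 image rm kicbase/echo-server:functional-143000
    out/minikube-darwin-arm64 -p functional-143000 image load /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-143000 image save --daemon kicbase/echo-server:functional-143000
    docker image inspect kicbase/echo-server:functional-143000   # back on the host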

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-143000 docker-env) && out/minikube-darwin-arm64 status -p functional-143000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-143000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.40s)
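
DockerEnv/bash confirms that docker-env re-points a host shell at the daemon inside the VM, which is also the quickest way to poke at the image tests interactively:

    # export DOCKER_HOST and friends for the functional-143000 VM
    eval "$(out/minikube-darwin-arm64 -p functional-143000 docker-env)"
    docker images   # now lists the guest's images, not the host's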

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 update-context --alsologtostderr -v=2
2024/09/04 12:44:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-143000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
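
All three UpdateContextCmd subtests run the same command and differ only in the kubeconfig state they start from. To apply and verify it manually (the current-context check is an illustrative addition, not part of the test):

    out/minikube-darwin-arm64 -p functional-143000 update-context --alsologtostderr -v=2
    kubectl config current-context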

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-143000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-143000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-143000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-789000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0904 12:44:49.763569    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:46:11.685711    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-789000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m1.337571542s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.52s)
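
StartCluster brings up a three-node HA control plane in just over three minutes under qemu2. The two cert_rotation errors appear to be a stale watch on the earlier addons-970000 profile's client cert and did not affect this run. The start and health-check commands from the log:

    out/minikube-darwin-arm64 start -p ha-789000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr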

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-789000 -- rollout status deployment/busybox: (3.047811709s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-8hj88 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-n4dbv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-vp6kx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-8hj88 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-n4dbv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-vp6kx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-8hj88 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-n4dbv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-vp6kx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.45s)
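
DeployApp fans a busybox deployment across the HA cluster and checks DNS from every replica. A condensed version; the pod names carry a per-run hash, so substitute whatever get pods returns:

    out/minikube-darwin-arm64 kubectl -p ha-789000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-darwin-arm64 kubectl -p ha-789000 -- rollout status deployment/busybox
    out/minikube-darwin-arm64 kubectl -p ha-789000 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-8hj88 -- nslookup kubernetes.default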

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-8hj88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-8hj88 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-n4dbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-n4dbv -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-vp6kx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-789000 -- exec busybox-7dff88458-vp6kx -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
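
For reference, the host-IP extraction above depends on the layout of BusyBox nslookup output: the test takes line 5, field 3 of "nslookup host.minikube.internal" and pings the result. A minimal sketch of the same probe, runnable from any pod shell under that assumption (the 192.168.105.1 gateway seen here is specific to this run):

    # Resolve the hostname minikube injects for the hypervisor host, then ping it once.
    # Assumes BusyBox nslookup, whose fifth output line carries the resolved address in field 3.
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"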

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-789000 -v=7 --alsologtostderr
E0904 12:48:27.796368    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-789000 -v=7 --alsologtostderr: (54.858177708s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-789000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp testdata/cp-test.txt ha-789000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3662623890/001/cp-test_ha-789000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000:/home/docker/cp-test.txt ha-789000-m02:/home/docker/cp-test_ha-789000_ha-789000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test_ha-789000_ha-789000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000:/home/docker/cp-test.txt ha-789000-m03:/home/docker/cp-test_ha-789000_ha-789000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test_ha-789000_ha-789000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000:/home/docker/cp-test.txt ha-789000-m04:/home/docker/cp-test_ha-789000_ha-789000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test_ha-789000_ha-789000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp testdata/cp-test.txt ha-789000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3662623890/001/cp-test_ha-789000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m02:/home/docker/cp-test.txt ha-789000:/home/docker/cp-test_ha-789000-m02_ha-789000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test_ha-789000-m02_ha-789000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m02:/home/docker/cp-test.txt ha-789000-m03:/home/docker/cp-test_ha-789000-m02_ha-789000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test_ha-789000-m02_ha-789000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m02:/home/docker/cp-test.txt ha-789000-m04:/home/docker/cp-test_ha-789000-m02_ha-789000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test_ha-789000-m02_ha-789000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp testdata/cp-test.txt ha-789000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3662623890/001/cp-test_ha-789000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m03:/home/docker/cp-test.txt ha-789000:/home/docker/cp-test_ha-789000-m03_ha-789000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test_ha-789000-m03_ha-789000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m03:/home/docker/cp-test.txt ha-789000-m02:/home/docker/cp-test_ha-789000-m03_ha-789000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test_ha-789000-m03_ha-789000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m03:/home/docker/cp-test.txt ha-789000-m04:/home/docker/cp-test_ha-789000-m03_ha-789000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test_ha-789000-m03_ha-789000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp testdata/cp-test.txt ha-789000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3662623890/001/cp-test_ha-789000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m04:/home/docker/cp-test.txt ha-789000:/home/docker/cp-test_ha-789000-m04_ha-789000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000 "sudo cat /home/docker/cp-test_ha-789000-m04_ha-789000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m04:/home/docker/cp-test.txt ha-789000-m02:/home/docker/cp-test_ha-789000-m04_ha-789000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test_ha-789000-m04_ha-789000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 cp ha-789000-m04:/home/docker/cp-test.txt ha-789000-m03:/home/docker/cp-test_ha-789000-m04_ha-789000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m03 "sudo cat /home/docker/cp-test_ha-789000-m04_ha-789000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.11s)
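
The sequence above is one pattern repeated across every node pair: each "cp" is immediately verified by reading the file back over "ssh -n" on the destination node. A condensed sketch of a single round trip, using the profile and node names from this run:

    # Copy a local file onto node m02, then cat it back to confirm the transfer.
    out/minikube-darwin-arm64 -p ha-789000 cp testdata/cp-test.txt ha-789000-m02:/home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p ha-789000 ssh -n ha-789000-m02 "sudo cat /home/docker/cp-test.txt"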

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0904 12:58:27.784554    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/addons-970000/client.crt: no such file or directory" logger="UnhandledError"
E0904 12:58:55.869245    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.868960208s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-470000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-470000 --output=json --user=testUser: (1.8852795s)
--- PASS: TestJSONOutput/stop/Command (1.89s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-862000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-862000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.96425ms)

-- stdout --
	{"specversion":"1.0","id":"33a4b7df-e171-419f-8202-92b2de8050ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-862000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f35286b9-7445-4c9b-912a-60ee48b9e791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19575"}}
	{"specversion":"1.0","id":"ba72eaec-5023-4620-bd3b-c14118b71f58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig"}}
	{"specversion":"1.0","id":"fcbee201-ac12-47e9-99ab-d6f6f27cec4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"94c798cb-3882-4b7b-ab83-878423058af7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4bf0f5c4-8788-4546-ac68-dc12395a4c1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube"}}
	{"specversion":"1.0","id":"7b412b1c-04e0-47ec-ae7b-8867ee0a2548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e9ea9fdf-1343-415d-84f2-d0b47573620a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-862000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-862000
--- PASS: TestErrorJSONOutput (0.21s)
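
Each stdout line above is a CloudEvents-style JSON object whose "type" field distinguishes steps, info, and errors, so the failure can be isolated mechanically. A hedged sketch using jq (jq is not part of the test suite; the filter simply mirrors the io.k8s.sigs.minikube.error event shown above):

    # Print only error events from minikube's --output=json stream.
    out/minikube-darwin-arm64 start -p json-output-error-862000 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'
    # Expected output for this run:
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64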

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.17s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (95.884875ms)

-- stdout --
	* [NoKubernetes-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19575-1140/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19575-1140/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
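
The exit-status-14 path above is the intended usage error: --no-kubernetes and --kubernetes-version are mutually exclusive, and the stderr block points at the fix when the version is pinned in the global config rather than on the command line. A minimal recovery sketch built from the commands in this run:

    # Clear a globally pinned Kubernetes version, then retry without Kubernetes at all.
    out/minikube-darwin-arm64 config unset kubernetes-version
    out/minikube-darwin-arm64 start -p NoKubernetes-388000 --no-kubernetes --driver=qemu2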

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-388000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-388000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.475ms)

-- stdout --
	* The control-plane node NoKubernetes-388000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-388000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
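
The verification above leans on systemd semantics: "systemctl is-active --quiet" exits 0 only when the unit is active, so any non-zero status counts as "kubelet not running" (here the ssh step itself exits 83 because the host is stopped, which the test also accepts). A sketch of the same probe against a running host:

    # Exit 0 only if the kubelet unit is active; anything else means it is not running.
    out/minikube-darwin-arm64 ssh -p NoKubernetes-388000 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet running" \
      || echo "kubelet not running"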

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.643248583s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E0904 13:21:58.953633    1661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19575-1140/.minikube/profiles/functional-143000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.727542375s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-388000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-388000: (1.8494135s)
--- PASS: TestNoKubernetes/serial/Stop (1.85s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-388000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-388000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (38.648917ms)

-- stdout --
	* The control-plane node NoKubernetes-388000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-388000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-175000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-455000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-455000 --alsologtostderr -v=3: (3.314264708s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 -n old-k8s-version-455000: exit status 7 (51.151459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-455000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
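
This stop-then-enable sequence recurs for every StartStop group below: "status --format={{.Host}}" exits 7 once the host is stopped, the harness treats that as acceptable ("may be ok"), and the dashboard addon is then enabled against the stopped profile so it takes effect on the next start. A condensed sketch with names from this run:

    # Exit status 7 from 'status' signals a stopped host, not a command failure.
    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-455000 || echo "status exit $? (stopped)"
    out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-455000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4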

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-393000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-393000 --alsologtostderr -v=3: (3.313845709s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-393000 -n no-preload-393000: exit status 7 (45.358875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-393000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-727000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-727000 --alsologtostderr -v=3: (3.077839s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-727000 -n embed-certs-727000: exit status 7 (39.783875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-727000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-227000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-227000 --alsologtostderr -v=3: (2.101392208s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-227000 -n default-k8s-diff-port-227000: exit status 7 (59.217417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-227000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-509000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-509000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-509000 --alsologtostderr -v=3: (3.376216125s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-509000 -n newest-cni-509000: exit status 7 (58.108792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-509000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-134000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-134000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /etc/resolv.conf:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-134000

>>> host: crictl pods:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: crictl containers:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> k8s: describe netcat deployment:
error: context "cilium-134000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-134000" does not exist

>>> k8s: netcat logs:
error: context "cilium-134000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-134000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-134000" does not exist

>>> k8s: coredns logs:
error: context "cilium-134000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-134000" does not exist

>>> k8s: api server logs:
error: context "cilium-134000" does not exist

>>> host: /etc/cni:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: ip a s:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: ip r s:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: iptables-save:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: iptables table nat:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-134000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-134000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-134000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-134000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-134000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-134000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-134000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-134000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-134000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-134000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-134000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: kubelet daemon config:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> k8s: kubelet logs:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-134000

>>> host: docker daemon status:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: docker daemon config:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: docker system info:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: cri-docker daemon status:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: cri-docker daemon config:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: cri-dockerd version:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: containerd daemon status:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: containerd daemon config:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: containerd config dump:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: crio daemon status:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: crio daemon config:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: /etc/crio:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

>>> host: crio config:
* Profile "cilium-134000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-134000"

----------------------- debugLogs end: cilium-134000 [took: 2.195985s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-134000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-134000
--- SKIP: TestNetworkPlugins/group/cilium (2.30s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-359000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)